Is it possible to give Google Assistant commands programmatically? For example, I'd like to be able to send a command as text "turn on the fan" and have GA react as if that was the spoken command. I would also accept sending a JSON request in whatever format needed (with device IDs or whatever the API needs).
My situation is I have a ceiling fan that is controlled by Google Assistant. I want to be able to control it programmatically. For example, some event happens and my code wants to turn the fan on. Is there any way my code can tell GA to turn on the fan?
I tried using the Google Assistant SDK. I can send it text like "what time is it?" and get back text and audio, e.g. "It is 11:00am". However, I have a test device called "washer", and if I send the text "is the washer running?" I get back "Sorry, I didn't understand". If I speak the same words into my phone, I get back "The washer is running".
Why can't the GA SDK interact with my device? The credentials I give to the GA SDK are the same I use for my SmartHomeApp that defines the "washer" device.
To do this, you can set up a virtual Assistant device and then send commands to it.
Check out Assistant Relay, which is a service that sets up a virtual Assistant device and exposes a REST API so you can send text commands to it, as if they were spoken.
Per the Documentation:
Simply send Assistant Relay any query you would normally send Google Assistant, and Assistant Relay will call the Assistant SDK and execute your command.
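As a rough sketch, a command can be sent to a locally running Assistant Relay instance over HTTP. The port, path, and body fields below are assumptions based on the Assistant Relay documentation; verify them against the version you install.

```js
// Hypothetical sketch: send a text command to a local Assistant Relay
// instance. The endpoint, port, and body fields are assumptions --
// check them against your installed Assistant Relay version.
const fetch = require('node-fetch');

async function sendCommand(command) {
  const res = await fetch('http://localhost:3000/assistant', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      command,      // the text you would normally speak, e.g. 'turn on the fan'
      user: 'me',   // a user name you configured in Assistant Relay
    }),
  });
  console.log(await res.json());
}

sendCommand('turn on the fan');
```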
As for the problem you are having with the Google Assistant SDK: I believe what you are trying to achieve is only possible through a device, be it physical or virtual, and not by using the SDK directly.
Allowing each smart device to connect to the Internet on its own raises a lot of firewall and security issues. To alleviate this, Google's design uses one of its own devices as a fulfillment bridge that connects to the smart device locally.
You are locally, on your smartphone, hooking into Google Assistant.
The phone is the fulfillment glue for the "washer" device.
According to this page:
Google Home or Google Nest device is required to perform the fulfillment.
Due to the portable nature of cell phones, it does not make sense to allow one to be used as the fulfillment device remotely, hence the local hook.
Related
I would like to issue a command to a Google device remotely using Google's API.
I have multiple Google devices (a Nest Mini and a Nest Hub) and I would like to trigger a specific one by sending a request and issuing a command from code.
e.g.
Send request to Office's nest mini equivalent to OK Google, <something>
or
Send request to Kitchen's nest hub equivalent to OK Google, start 30 min timer
or
Send request to Living Room's nest hub equivalent to OK Google, initiate custom action
After the command is issued programmatically, the device would then behave as if the command had been given in person.
I am new to Google's environment and I have not been able to find any docs describing how to do this.
I have been experimenting with Google Smart Home and the protocol flow seems very clear to me. In summary:
action.devices.SYNC - sent by Google Smart Home to the fulfillment service to find out the available devices
action.devices.EXECUTE - sent by Google Smart Home to the fulfillment service to execute a certain action on a device
On their smartphone or tablet, customers can place a device in a certain location. This allows them to ask questions such as "Turn everything in my office off". Internally, Google Smart Home knows which devices are located in the office and subsequently sends an action.devices.EXECUTE intent for each device in the office, as explained above.
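For illustration, here is a minimal fulfillment sketch using the actions-on-google Node.js library: when the user says "Turn everything in my office off", the EXECUTE intent arrives with the device IDs already resolved, but with no hint of the room they came from. The handler shape follows the Smart Home EXECUTE schema; the device IDs are made up.

```js
const { smarthome } = require('actions-on-google');

const app = smarthome();

// One EXECUTE request can carry several devices -- e.g. every light in
// the office -- but no room name and no original spoken query.
app.onExecute((body) => {
  const ids = [];
  for (const command of body.inputs[0].payload.commands) {
    for (const device of command.devices) {
      ids.push(device.id); // e.g. 'office-lamp-1', 'office-fan' (made up)
      // ...switch the physical device off here...
    }
  }
  return {
    requestId: body.requestId,
    payload: {
      commands: [{ ids, status: 'SUCCESS', states: { on: false } }],
    },
  };
});
```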
I am now wondering about the following: is it possible to retrieve the configured locations/rooms in the fulfillment service also? Is this information exposed and available to retrieve?
It is not possible to receive information about a user's home layout through the Home Graph API. When the user gives a command like "Turn everything in my office off", you may get several OnOff commands in your fulfillment, although you will have no way of knowing the original query.
I created an app on Google Home, and I want the app to work only if it is MY voice asking it to perform the skills on my Google Home.
How can I do that? Is it possible? I configured my Google Home at the beginning so that it recognizes my voice; now, how do I make that mandatory for this app?
I'm trying to make my app secure because it performs banking operations by voice.
Each user request is sent to your application with a unique anonymous UserID. You will need to determine the UserID that belongs to your account (by looking at logs for your application to see which value is yours) and reject requests from other UserIDs.
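As a minimal sketch with the v1-era actions-on-google Node.js library (the UserID value and intent name below are hypothetical; read yours out of your own logs):

```js
const { DialogflowApp } = require('actions-on-google');

// Hypothetical: the anonymous UserID you found in your own logs.
const MY_USER_ID = 'ABwppHE...';

exports.webhook = (request, response) => {
  const app = new DialogflowApp({ request, response });

  function bankingIntent(app) {
    // Reject any request whose anonymous UserID isn't yours.
    if (app.getUser().userId !== MY_USER_ID) {
      app.tell('Sorry, this action is private.');
      return;
    }
    app.ask('Welcome back. Which account would you like?');
  }

  app.handleRequest(new Map([['banking.intent', bankingIntent]]));
};
```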
Even better would be to set up a proper Account Linking system.
Keep in mind, however, that the voice authentication system isn't perfect and there is a slight but real chance that others could duplicate the request, either by using a recording of your voice or by having a similar voice. Consider all the risks when designing such applications.
I developed an Actions on Google app which sends a rich response. Everything works fine in the Actions on Google simulator. Now I want to test it on my Google Home Mini, but my rich responses are not read out by the Mini. Is it possible to send my rich response to the Google Home app instead? For example, the Home Mini would say something like "Ok, I found these hotels, look at the Home app", and the rich responses would appear there.
You can't send users to the Home app, but you can direct them to the Assistant available through their phone. The process is roughly as follows (a code sketch follows the steps):
At some point in the conversation (decide what is best for you, but when you have results that require display is usually good, or if the user says something like "Show me" or "Send this to my phone"), determine if they are on a device with a screen or not. You do this by using the app.getSurfaceCapabilities() method or by looking at the JSON in the originalRequest.data.surface.capabilities property. If they're using a screen, you're all set. But if not...
Make sure they have a screen they can use. You'll do this by checking out the results from app.getAvailableSurfaces() or looking at the JSON in the (not fully documented) originalRequest.data.availableSurfaces array. If they don't have a screen, you'll need to figure out your best course of action. But if they do have a screen surface (such as their phone, currently) available...
You can request to transfer them to the new surface using the app.askForNewSurface() method, passing a message explaining why you want to do the switch, a message that will appear as a notification on the device, and what surface you need (the screen).
If the user approves, they'll get the notification on their mobile device (using that device's normal notification system). When they select the notification, the Assistant will open up and will send your Action an Event called actions_intent_NEW_SURFACE. You'll need to create an Intent that handles this Event and forwards it to your webhook.
Your webhook should confirm that it is on a useful surface, and then proceed with the conversation and send the results.
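Putting the steps together, a v1-era actions-on-google sketch might look like this (the intent logic and hotel content are hypothetical):

```js
// Steps 1-3: decide whether to show results here or ask for a transfer.
function showHotels(app) {
  if (app.hasSurfaceCapability(app.SurfaceCapabilities.SCREEN_OUTPUT)) {
    // Already on a screen device: show the rich response directly.
    app.ask(app.buildRichResponse()
      .addSimpleResponse('Here are the hotels I found.'));
  } else if (app.hasAvailableSurfaceCapabilities(
      app.SurfaceCapabilities.SCREEN_OUTPUT)) {
    // No screen here, but one is available (e.g. the user's phone).
    app.askForNewSurface(
      'I found some hotels to show you.',  // spoken context
      'Hotel results',                     // notification title
      [app.SurfaceCapabilities.SCREEN_OUTPUT]);
  } else {
    app.tell('I found three hotels, but there is no screen to show them on.');
  }
}

// Steps 4-5: handler for the actions_intent_NEW_SURFACE event,
// running after the user tapped the notification on their phone.
function onNewSurface(app) {
  if (app.isNewSurfaceRequestSucceeded()) {
    app.ask(app.buildRichResponse()
      .addSimpleResponse('Here are your hotels.')
      .addSuggestions(['Cheapest', 'Closest']));
  } else {
    app.tell('Okay, maybe next time.');
  }
}
```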
You can see more about handling different surfaces at https://developers.google.com/actions/assistant/surface-capabilities
Rich responses can appear on screen-only or audio and screen experiences.
They can contain the following components:
One or two simple responses (chat bubbles)
An optional basic card
Optional suggestion chips
An optional link-out chip
An option interface (list or carousel)
So you need to make sure that the text response contains all the details for voice-only cases (e.g. Google Home/Mini/Max).
However, if your users are using the Assistant from a device with a screen, you can offer them a better experience with rich responses (e.g. suggestion chips, links, etc.).
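For example, with the v1-era actions-on-google library you can pair a fully spoken simple response with a card and chips (the hotel details and URL here are made up):

```js
// The simple response carries the full spoken detail for voice-only
// devices; the card and suggestion chips enhance screen devices.
app.ask(app.buildRichResponse()
  .addSimpleResponse('I found the Grand Hotel, at 120 dollars a night.')
  .addBasicCard(app.buildBasicCard('From $120 per night, free wifi.')
    .setTitle('Grand Hotel')
    .addButton('View deal', 'https://example.com/grand-hotel'))
  .addSuggestions(['More hotels', 'Book it']));
```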
I am creating an app for Google Assistant which will collect data while a user plays a game and then send that data to a project database. The API I am using to send the data (Synapse) requires it to be in file format; however, I can't find a way to create a file for the data due to the nature of Google Assistant apps. Am I overlooking a way to do this, or is there a way to get around it and send the data somewhere else to turn it into file format? The data is stored in a JSON object.
The conversation that your users have with your Action will be relayed from their Assistant device (such as Google Home) to Google's servers, which do a little processing, and then to your server. Your server is then responsible for sending back a reply to Google's servers, which sends it on to the Assistant device. This is very similar to how a web browser and server work, and for good reason - your server accepts commands via a "webhook", which is just a fancy way of saying that Google's servers contact your server via HTTPS, and you're sending back a reply via HTTPS.
Your webhook can do anything - as long as it does it fast enough. You can store what command the person has issued and either aggregate a number of them into a file format to send, or send each one.
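A sketch of that idea, assuming the form-data npm package and a made-up upload URL (I don't know Synapse's actual endpoint): the webhook buffers the JSON in memory and submits it as a file upload, no disk needed.

```js
const FormData = require('form-data');

// gameData is the JSON object collected during the conversation.
function uploadAsFile(gameData) {
  const form = new FormData();
  // Turn the in-memory JSON into a "file" without touching a disk.
  form.append('file', Buffer.from(JSON.stringify(gameData)), {
    filename: 'game-data.json',
    contentType: 'application/json',
  });
  // Hypothetical endpoint -- substitute the real Synapse upload URL.
  form.submit('https://example.com/upload', (err, res) => {
    if (err) console.error('upload failed', err);
    else console.log('upload status', res.statusCode);
  });
}
```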
Your Action does not, itself, run on the user's device any more than a web page with a form "runs" on the user's device. It displays there, just like your Action is read out loud... but almost all interaction is sent back to you with minimal processing on the device itself.