Using conversational action for use cases related to smart home, when smart home action does not have the required device type - actions-on-google

I want my smart device to be controlled by Google Assistant. I have checked the smart home action guide, but the required device type is not available for smart home actions.
I also need to customize my voice commands, so a smart home action is not an option for my requirements.
Is a conversational action a good choice for implementing smart home control? The Google Actions documentation lists the use cases for which conversational actions can be used.
Smart home use cases do not seem to fit the use cases mentioned there, since processing the request would take too long.
Is there any other option to implement this?

When selecting a device type to integrate with Google, the available device types and traits offer many combinations, so you can pick the device type closest to yours and attach all the required traits.
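For example, a gadget with no exact device type match might be reported with the closest available type and only the traits it really supports. A minimal sketch of one device entry in a SYNC response (the id, name, and chosen type/traits below are illustrative, not a recommendation for your device):

```typescript
// One device entry from a SYNC response: pick the closest available device
// type and list only the traits the hardware actually implements, since the
// supported commands come from the traits rather than the type.
const device = {
  id: 'workbench-gadget-1',                      // illustrative device id
  type: 'action.devices.types.SWITCH',           // closest matching type
  traits: [
    'action.devices.traits.OnOff',
    'action.devices.traits.Brightness',
  ],
  name: { name: 'Workbench gadget' },
  willReportState: false,
};
```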
NOTE - Conversational actions are currently deprecated and will be fully shut down on June 13, 2023. For more information, visit https://developers.google.com/assistant/ca-sunset.

Related

How to invoke Google Assistant via Button

Hi all,
We want to trigger Google Assistant with custom actions via a button press, not via voice input (a hotword).
For example, we normally invoke Google Assistant with a phrase such as "Hello Google, show me the weather." In our product, we want to press one specific button and have that sentence sent directly to Google Assistant.
But we can't find any APIs that support this requirement, and we have heard that Google plans to support a hard-key method, since Samsung has made a good experience with it on the S8.
Can anyone help us fill this gap?
Thank you!
You could use an Action link without any additional parameters specified to trigger the MAIN intent, or specify the custom intent you'd like to trigger.
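For reference, an Action link is just an HTTPS URL; a rough sketch of the two forms (the uid value and the ShowWeather intent are placeholders, not real values):

```
https://assistant.google.com/services/invoke/uid/000000abcdef1234
https://assistant.google.com/services/invoke/uid/000000abcdef1234?intent=ShowWeather&param.city=%22Seattle%22
```

The first form triggers the MAIN (welcome) intent; the second deep-links into a named custom intent and passes a URL-encoded parameter value.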

Differences between smart-home-nodejs and actions-on-google-nodejs

What's the difference between smart-home-nodejs (https://github.com/actions-on-google/smart-home-nodejs) and actions-on-google-nodejs (https://github.com/actions-on-google/actions-on-google-nodejs) handlers for Smart Home intents?
Which method should I use to create a Smart Home Application?
actions-on-google-nodejs is a Node.js library that simplifies the work involved in developing actions, including smart home actions.
smart-home-nodejs is a sample project showing one way to get started quickly with the smart home vertical. It does not use the actions-on-google library, as it was put together before the library supported smart home; updating the sample to use the library is an outstanding feature request.
For an example of smart home that does use the library, you can check out the smart home codelab.
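A minimal sketch of the library-based approach, assuming an Express server and placeholder handler bodies (the endpoint path and port are arbitrary):

```typescript
import { smarthome } from 'actions-on-google';
import * as express from 'express';

// The library gives you one handler per smart home intent and takes care of
// parsing the request and shaping the response envelope.
const app = smarthome();

app.onSync((body) => ({
  requestId: body.requestId,
  payload: { agentUserId: 'user-123', devices: [] },  // fill in real devices
}));

app.onQuery(async (body) => ({
  requestId: body.requestId,
  payload: { devices: {} },                           // fill in device states
}));

app.onExecute(async (body) => ({
  requestId: body.requestId,
  payload: { commands: [] },                          // fill in command results
}));

// The smarthome() app is an ordinary request handler, so it can be mounted
// directly on Express (or a Cloud Function) at your fulfillment URL.
const server = express();
server.use(express.json());
server.post('/fulfillment', app);
server.listen(3000);
```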

Is it possible to implement custom intents in Google Smart Home?

I successfully created a basic application which turns the lights in a certain room on or off. This uses the built-in intent 'turn everything in living room on/off'.
Is it possible to implement custom intents? Let's say I want to implement the intent 'put the lights in party mode', an intent which is not covered by the built-in intents. How can I do this? Can I route Google Smart Home services to Dialogflow intents?
Custom intents as a concept are not supported in the smart home integration for the Google Assistant. However, if you want to have specific feature sets like "party mode", you can create a "SCENE" device in your SYNC response that has the Scene trait. When the user says something like "activate party mode", your cloud integration will get an EXECUTE intent for that scene, which you can handle in any way.
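A sketch of that approach, with illustrative ids and names: the scene is declared as its own device in the SYNC response, and "activate party mode" then arrives as an ActivateScene command in the EXECUTE intent.

```typescript
// Declared in the SYNC response alongside the real light devices:
const partyModeScene = {
  id: 'scene-party-mode',
  type: 'action.devices.types.SCENE',
  traits: ['action.devices.traits.Scene'],
  name: { name: 'party mode' },
  willReportState: false,
  attributes: { sceneReversible: false },   // scenes can optionally be reversible
};

// In the EXECUTE handler, the scene shows up as an ActivateScene command
// for that device id; what "party mode" actually does is entirely up to you.
function handleExecuteCommand(deviceId: string, command: string): void {
  if (
    deviceId === 'scene-party-mode' &&
    command === 'action.devices.commands.ActivateScene'
  ) {
    // e.g. set colors, start a light loop, turn on the speakers, ...
  }
}
```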

Where are google home cards displayed if not using smartphone?

On Amazon Alexa, cards are displayed in the Amazon Alexa app (or on the screen of an Echo Show). If I call my Google actions on my smartphone, I am also able to view the cards. But what happens if I use a non-screen surface, like the Google Home? Do the cards appear anywhere in the Google Home app, or do they just get lost?
Cards (and other visual elements you can add) aren't shown if the surface you're currently interacting with doesn't support them. This is intentional since the user may not expect them there and might open the app later and be surprised.
You can always check what surfaces are being supported in your current conversation by using app.getAvailableSurfaces() or the equivalent JSON properties. If you need to show the user something, you can prompt them to change to a surface that supports display by using app.askForNewSurface(). See the documentation about Surface Capabilities for detailed information.
In general, it is a good design to expect the user to only interact with their voice and to require visual information only minimally. Visual information should be used to supplement and enhance the voice as much as possible.
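For reference, a sketch of that pattern using the newer conversation-style API of the actions-on-google Node.js client library (the intent name and prompt text are made up):

```typescript
import { dialogflow, NewSurface } from 'actions-on-google';

const app = dialogflow();
const SCREEN = 'actions.capability.SCREEN_OUTPUT';

app.intent('show_picture', (conv) => {
  const hasScreen = conv.surface.capabilities.has(SCREEN);
  const screenAvailable = conv.available.surfaces.capabilities.has(SCREEN);

  if (hasScreen) {
    // This surface can render cards and images directly.
    conv.ask('Here is the picture you asked for.');
  } else if (screenAvailable) {
    // Offer to hand the conversation off to a surface with a screen.
    conv.ask(new NewSurface({
      context: 'To show you the picture',
      notification: 'Picture',
      capabilities: [SCREEN],
    }));
  } else {
    conv.ask('I can only describe it by voice on this device.');
  }
});
```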

Google Maps and Siri implementation

Just wondering: can we integrate Google Maps and Siri together? For example:
I ask Siri, "show nearest Starbucks" and Siri will open the Map app or Google Maps and show the nearest Starbucks on the map.
Or
I ask Siri, "show me all Apple Stores". Siri will open the map and show all the locations of Apple Stores on the map.
Is this doable?
I haven't found any good tutorial or documentation to learn more about Siri integration, only articles. Is there no technical documentation or API available?
This isn't directly supported right now without writing a third-party website that Siri can hook into. According to the documentation linked below, from Apple's website, Siri on iOS 6 will support this functionality at least in limited part:
http://www.apple.com/ios/ios6/siri/
Eyes Free
Apple is working with car manufacturers to integrate Siri into select voice control systems. Through the voice command button on your steering wheel, you’ll be able to ask Siri questions without taking your eyes off the road. To minimize distractions even more, your iOS device’s screen won’t light up. With the Eyes Free feature, ask Siri to call people, select and play music, hear and compose text messages, use Maps and get directions, read your notifications, find calendar information, add reminders, and more. It’s just another way Siri helps you get things done, even when you’re behind the wheel.
This encourages me to believe that they will also expose the API (because someone will ferret it out if it exists) to normal API consumers during the iOS 6 lifecycle, probably before iOS 6.1, or with that release.
It is not currently possible through any kind of API that Apple provides. However, there are some third-party APIs that you could use, such as https://www.ispeech.org/developers/iphone. You'd have to use that and then pass the returned data on to the Google Maps API.
Although this approach won't be as intuitive as using Siri, which is not currently possible, it is your best bet for the time being.
Unfortunately, Apple has not yet opened Siri's API to developers, making this task impossible. However, Apple will probably open it soon. If you just want it for personal use, check out SiriProxy (https://github.com/plamoni/SiriProxy). SiriProxy lets you do exactly what you asked; however, for it to work you must be on your own Wi-Fi network, so it cannot be built into one of your apps. Good luck!