Is there any list of sample utterances for the Google smart home device types/traits? Since smart home Action device types/traits are pre-built, it's unfortunate that the Google Actions developer documentation doesn't state sample utterances under each device type/trait. Otherwise, developers have no clue which utterances are supported.
I'm not sure if there's a specific issue you're seeing, as there are sample utterances in the trait documentation; Brightness, for example, has sample utterances under the Examples heading.
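For reference, the examples on that Brightness page read roughly like this (paraphrased from memory, so check the live page for the exact wording):
- "Dim the lights."
- "Brighten the bedroom light."
- "Set the lamp to 50% brightness."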
As Nick already stated, there are plenty of sample utterances available; but since smart home runs on the Assistant's AI, you can also experiment and discover utterances that work even though they are not listed in the docs.
Cheers.
I have integrated the trial play function provided by the HMS Core Game Service SDK 5.0.1.302 into my game and completed the following as instructed by the official documentation:
1. Applying to enable the forcible identity verification function.
2. Adding the code snippet for implementing trial play to my game code.
However, when my game was launched and the identity verification pop-up was displayed, the trial play option was not available, which means that the function did not take effect. How can I make the function take effect?
The following are some of the prerequisites for enabling the trial play function:
The integrated Game Service SDK version must be 5.0.1 or later.
The forcible identity verification has been enabled.
Code relating to the trial play function has been added.
The default authorization parameter of the game must be DEFAULT_AUTH_REQUEST_PARAM_GAME.
For details, check the docs.
As Shirley already pointed out, please double-check all four prerequisites and make sure they are all in place. One common mistake is building the sign-in request with the plain DEFAULT_AUTH_REQUEST_PARAM instead of the game-specific DEFAULT_AUTH_REQUEST_PARAM_GAME (prerequisite 4 above); without the game parameter, the trial play option will not appear.
Please see if that matches your situation. If you still encounter errors, please refer to this link to find out more error code info.
When testing our AoG, we noticed that deep links currently no longer work properly on speakers (e.g. Google Home) and smart displays (e.g. Google Nest Hub). This behavior has been occurring for several days. Before that, everything was functioning normally. In contrast, deep links in the Google Assistant App still work normally.
Deep links like "OK Google, ask ActionName for abc" won't trigger the AoG but return the error message "The agent returned an empty TTS". Deep links like "OK Google, start ActionName and do abc" are working fine.
We tested this behavior in the Actions Console and on real devices like Google Home and Google Nest Hub.
Is there any way to fix this?
There may have been a bug at the time of posting the question. The bug has since been resolved and I'm unable to reproduce this error today.
I haven't seen many posts here regarding the Xbox Live API offered by Microsoft. Can anyone point me to a site where there is an active Xbox Live API community?
This is rather vague, so I will provide a broad answer.
ID@Xbox
If you're an independent publisher looking to implement Xbox Live services into your game, sign up with ID@Xbox (https://www.xbox.com/en-US/developers/id). There you will be given access to developer forums and thorough documentation. It's official and full of knowledgeable people.
Creators Program
If you're creating an app for the Xbox One marketplace or Windows 10, you can join the Xbox Live Creators Program (https://www.xbox.com/developers/creators-program). This is much more streamlined than ID@Xbox and has fewer verification steps. It's a quick way to integrate Xbox Live with whatever you are doing.
Documentation
For complete documentation on the Xbox Live API, visit https://learn.microsoft.com/en-us/gaming/xbox-live/. This assumes you already have the necessary access through either ID@Xbox or the Creators Program.
The aforementioned avenues are the official way to tap in and join the community.
Other Options
Above all else, if you're simply looking to integrate Xbox Live into your website or something similarly small, Microsoft isn't interested. There is no official support for that unless you're one of the above partners. However, there remain options.
1) OpenXBL (https://xbl.io) offers support for Xbox Live API calls and sign-in authentication, with plenty of demo projects to get you going. There is also a Discord channel available to chat with others using the service. A minimal call sketch follows after this list.
2) https://xboxapi.com also offers support for Xbox Live API calls.
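To give an idea of what working with OpenXBL looks like, here is a minimal call sketch from Node. The endpoint path (/api/v2/account) and the X-Authorization header are my assumptions from memory of their docs, so verify them at https://xbl.io before relying on this:

```typescript
// Minimal OpenXBL request sketch (Node 18+, built-in fetch).
// Assumptions: the /api/v2/account endpoint and the X-Authorization
// header are recalled from the OpenXBL docs -- verify at https://xbl.io.
const OPENXBL_API_KEY = process.env.OPENXBL_API_KEY ?? ""; // your app key from xbl.io

async function getMyProfile(): Promise<unknown> {
  const res = await fetch("https://xbl.io/api/v2/account", {
    headers: {
      "X-Authorization": OPENXBL_API_KEY, // OpenXBL's API-key header
      "Accept": "application/json",
    },
  });
  if (!res.ok) {
    throw new Error(`OpenXBL request failed: ${res.status} ${res.statusText}`);
  }
  return res.json(); // profile data for the account tied to the key
}

getMyProfile().then((profile) => console.log(profile)).catch(console.error);
```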
I’ve already built an Alexa skill, and now I want to make that available on Google Home. Do I have to start from scratch or can I reuse its code for Actions on Google?
Google Assistant works similarly to Amazon Alexa, although there are a few differences.
For example, you don't create your language model inside the "Actions on Google" console. Most Google Action developers use Dialogflow (formerly API.AI), which is owned by Google and offers a deep integration. Dialogflow used to offer an import feature for Alexa interaction models, but it no longer works. Instead, you can take a look at this tutorial: Turn an Alexa Interaction Model into a Dialogflow Agent.
Although most of the work in developing voice apps is parsing JSON requests and returning JSON responses, the Actions on Google SDK works differently from the Alexa SDK for Node.js.
To help people build cross-platform voice apps with only one code base, we developed Jovo, an open-source framework whose API is a little closer to the Alexa SDK than to the Google Assistant one. So if you're considering porting your code over, take a look; I'm happy to help! You can find the repository here: https://github.com/jovotech/jovo-framework-nodejs
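To give you a feel for the "one code base" idea, here is a minimal handler sketch. It assumes a Jovo v3-style project (the package names and scaffolding differ between Jovo versions, and the HelloWorldIntent name is just an illustration), so treat it as a sketch rather than copy-paste code:

```typescript
// Minimal Jovo app sketch: one handler serves both Alexa and Google Assistant.
// Assumption: Jovo v3-style setup; check the repository linked above for the
// current project scaffolding (`jovo new <project>` generates all of this).
import { App } from "jovo-framework";
import { Alexa } from "jovo-platform-alexa";
import { GoogleAssistant } from "jovo-platform-googleassistant";

const app = new App();
app.use(new Alexa(), new GoogleAssistant());

app.setHandler({
  LAUNCH() {
    // Runs for both an Alexa LaunchRequest and a Google Assistant welcome intent.
    return this.toIntent("HelloWorldIntent");
  },
  HelloWorldIntent() {
    // tell() ends the session on either platform.
    this.tell("Hello from one code base, on Alexa and Google Assistant!");
  },
});

export { app };
```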
It is possible to manually convert your Alexa skill to work as an Assistant Action. Both a skill and an action have similar life cycles that involve accepting incoming HTTP requests and then responding with JSON payloads. The skill’s utterances and intents can be converted to an Action Package if you use the Actions SDK or can be configured in the API.ai web GUI. The skill’s handler function can be modified to use the Actions incoming JSON request format and create the expected Actions JSON response format. You should be able to reuse most of your skill’s logic.
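To make the "similar life cycles" point concrete, here is a rough sketch of a single webhook answering both platforms. The envelope shapes below (Alexa's v1.0 response format and the API.AI v1 webhook format) are reproduced from memory, and the port is arbitrary, so verify the payloads against the current reference docs:

```typescript
// One webhook, two response shapes -- a sketch of the idea described above.
// Assumptions: Alexa's v1.0 response envelope and the API.AI (Dialogflow v1)
// webhook format, both from memory; verify against current documentation.
import * as http from "node:http";

function isAlexaRequest(body: any): boolean {
  // Alexa requests carry a top-level "request" object with a "type" field;
  // API.AI v1 requests carry a top-level "result" object instead.
  return body?.request?.type !== undefined;
}

const server = http.createServer((req, res) => {
  let raw = "";
  req.on("data", (chunk) => (raw += chunk));
  req.on("end", () => {
    const body = JSON.parse(raw || "{}");
    const speech = "Hello from a shared code base!";

    // The core app logic can be shared; only the JSON envelope differs.
    const payload = isAlexaRequest(body)
      ? {
          version: "1.0",
          response: {
            outputSpeech: { type: "PlainText", text: speech },
            shouldEndSession: true,
          },
        }
      : {
          speech,              // spoken response (API.AI v1 field)
          displayText: speech, // text shown on screens
        };

    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(payload));
  });
});

server.listen(3000);
```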
This can be done; it will require some work, but you will not have to rewrite all of your code.
Check out this video on developing a Google Home Action using API.AI (which is the recommended approach).
Once you have covered the basics and started understanding how Google Home Actions differ from Amazon Alexa skills, you can transfer your logic over. The idea of intents is very similar on both platforms, but each has intricacies that you must learn.
When an intent is executed, your app logic will be similar in most cases. It is just the setup, deployment, and running that differ.
I would like to access the camera data in real time to get the hue of several points, in order to guide the user (i.e., tell them when the best moment to take the picture is).
The application will probably be released on the App Store, so I want to use only allowed APIs. I've seen a lot of similar topics, some of them saying this is possible, but none of them showing a solution.
Do you have any idea for this?
Thanks in advance :)
You need the undocumented UIGetScreenImage() function, which returns a CGImageRef of the current screen contents; an Apple representative recently stated in the iPhone developer forums that its use is approved.