Can a private, alpha-released action remove the need to use the phrase "talk to [my app]" in its invocation?
For example, is it possible to invoke the action by saying "Ok Google, tell me my agenda," automatically invoking the private action, in contrast to "Hey Google, tell My App to tell me my agenda"?
Implicit invocation seemed promising, but further research made me think it was just a way for Google to suggest your app to a user. Am I mistaken in this interpretation?
Implicit invocation is a feature, similar to built-in intents, that invokes an action directly without the "talk to my app" prefix. These features should work for alpha-release actions as well as public ones.
I don't know whether "agenda" specifically will be supported, since that is something the Google Assistant can handle directly, but a query like "play a game" will suggest your alpha-released action.
Related
I want to be able to talk with the Google Assistant but connect the Actions project directly to an NLP service I already have running on my server. In other words, NOT use Dialogflow.
All of the following examples show how to do this:
With Rasa
https://blog.rasa.com/going-beyond-hey-google-building-a-rasa-powered-google-assistant/
With LUIS
https://www.grokkingandroid.com/using-the-actions-sdk/
https://dzone.com/articles/using-the-actions-sdk-for-google-assistant-develop
With Watson
https://www.youtube.com/watch?v=no0R0bSkHXc
They use actions.intent.MAIN as the invocation and actions.intent.TEXT for all other utterances from the user.
This is what I need. I don't want to create a load of intents with utterance phrases inside the Action, because I just want all the phrases spoken by the user to be passed to my server, where my NLP service can deal with them.
So I set up a new Actions project, installed the Actions CLI, and then spent three days trying all possible combinations without success, because all of these examples use gactions CLI 2.1.3 and Google has since moved on to gactions CLI 3.1.0.
Not only have the commands changed, but so too have the file formats and structure.
It appears there is also a new Google Actions Console, and actions.intent.TEXT is no longer available.
My Action is connected to my server via a webhook, but I cannot figure out how to get actions.intent.TEXT included and working.
Everything I find, even here:
Publishing Actions on google without Dialogflow
is from before the version update and follows the same pattern.
Can anyone point to an up-to-date (v3.1.0) discussion, tutorial, or example of how to send all of the user's phrases through to an NLP that isn't Dialogflow, or has Google closed that avenue?
Is it possible to somehow go back and use the 2.1 CLI, either with the new console or by reverting the console? (I have both CLI versions; I can see how different their commands are.)
Is it possible to go back and use 2.1?
There is no way to go back to AoG 2. You probably don't want to anyway - the newer features are only available with v3.
Can I use my own NLP with v3?
Yes, although it isn't as obvious, and there are some changes in semantics.
As an overview, what you'll need to do is:
Create a Type that can accept "Free form text". I usually call this type "Any".
In the console, it looks something like this:
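If you prefer the v3 configuration files over the console (more on the CLI below), the same type can be declared in a file. A minimal sketch, assuming the file is named custom/types/any.yaml as in Google's Actions Builder samples (the path and exact schema are worth checking against the current docs):

```yaml
# custom/types/any.yaml -- a type that accepts free-form text
freeText: {}
```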
Create a Custom Intent that has a single parameter of this Any Type and at least one phrase that captures everything for this parameter. (So you should add one training phrase, highlight the entire phrase, and set it for the parameter. Sometimes I also add additional phrases that include words I don't want to capture.) I usually call the Intent "matchAny" and the parameter "any".
In the console, it could be something like this:
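Expressed as a v3 configuration file, the intent might look roughly like the sketch below. The file path and the ($any ... auto=true) annotation syntax follow Google's Actions Builder samples; the training phrases themselves are just illustrations:

```yaml
# custom/intents/matchAny.yaml -- a single parameter that captures the whole utterance
parameters:
- name: any
  type:
    name: any
trainingPhrases:
- ($any 'what is the weather tomorrow' auto=true)
- ($any 'book a table for two tonight' auto=true)
```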
Finally, you'll have a Scene that you transition to from the Main invocation. When it matches the "matchAny" Intent, it should call your webhook with a handler name. Your webhook will be called with the "any" parameter set to the user utterance. (Note that the JSON format has also changed.)
Again, the console might have it looking something like this:
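In configuration-file form, the Main invocation and the scene might look something like the sketch below. The scene name "MainScene" and handler name "handleAny" are example names, and the schema follows Google's Actions Builder samples:

```yaml
# custom/global/actions.intent.MAIN.yaml -- transition into the scene on invocation
transitionToScene: MainScene
```

```yaml
# custom/scenes/MainScene.yaml -- route anything that matches matchAny to the webhook
intentEvents:
- intent: matchAny
  handler:
    webhookHandler: handleAny
```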
That seems like a lot of work. Isn't there just some way to do all that from the command line?
Yes. You can do all of that in the configuration files that the CLI accesses and then upload it. (You can then also use the console to review the configuration, if necessary, to make sure everything is set up as you expect. You can shift back and forth between the two as appropriate.)
Google also has a GitHub repository that contains most of the files pre-configured for this sort of setup.
You will need to update the configuration from the repository to handle the webhook correctly (it includes code to illustrate what is happening using the inline code editor) and to add your project ID.
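For example, to point the Action at your own server rather than the inline editor, and to read the captured text in your fulfillment, the pieces might look roughly like this. The endpoint URL is a placeholder, and the handler below simply echoes the utterance where you would instead call your own NLP service:

```yaml
# webhooks/ActionsOnGoogleFulfillment.yaml -- external HTTPS endpoint instead of the inline editor
handlers:
- name: handleAny
httpsEndpoint:
  baseUrl: https://your-server.example.com/fulfillment
```

```typescript
// webhook.ts -- a minimal fulfillment sketch using the @assistant/conversation library
import { conversation } from '@assistant/conversation';

const app = conversation();

// "handleAny" and the "any" parameter match the names used in the scene and intent above.
app.handle('handleAny', (conv) => {
  const utterance = conv.intent.params?.any?.resolved as string;
  // Forward `utterance` to your own NLP service here and build the reply from its result.
  conv.add(`You said: ${utterance}`);
});

// `app` is a standard (req, res) handler; serve it at the baseUrl configured above,
// for example behind Express or as a Cloud Function.
export { app };
```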
I deployed the production release of my first Google Action, but even two weeks after the approved deployment I can find my action neither in the Google Assistant store nor directly via voice invocation.
The test versions ran fine and were found every single time, so the invocation name should not be the problem. When I use the generated action link from the console, I can find the action in the store and send it to my smartphone, where I can start it with the link. But after closing the app, I cannot open it again via voice.
I used two different Google accounts on different devices (all set up in the correct language: German), but no luck.
Is this a mistake on Google's side, or am I missing something? In this state I have to open the action every time via the action link, which is useless for a voice app :)
Here is the link to my action: https://assistant.google.com/services/invoke/uid/000000c77f740137?hl=de
The invocation would be, for example, "Mit Erfolgs-Fans sprechen" ("Talk to Erfolgs-Fans"; as mentioned, during testing this name was found every single time).
This happened to me too, though it wasn't in production when I faced it. I was also not able to see the action on Assistant-enabled devices (Google Home app / Assistant).
It is probably not an invocation-name issue; it is just that the action is not being made visible across all the platforms. Some of the solutions I tried were:
Clear the cache of the device.
Create a new action with everything exactly the same as in your current action. It worked once for me!
Go to GCP and, under your project, check whether there is any pending activity.
Once, because I wasn't able to see an action, I created many copies, and after around 30 days all of those actions started to become visible. So if you can wait, that is fine.
In the end, do contact their support with all the relevant information. They should be able to help.
Thank you!
I would like to control blinds using a Google smart home Action. How can I create commands like "turn/put my blind up/down"? What device traits should I use? It seems the OnOff trait doesn't understand "up" and "down"; can I customize it? Thanks!
You could use the undocumented (use at your own risk) device type: action.devices.types.BLINDS.
For the traits, you could use:
On/Off: action.devices.traits.OnOff
Brightness: action.devices.traits.Brightness
This way, you can ask Google to set a specific position, to close (in Italian this works as a turn-off command; in English I have not tried it yet), to turn on, or to turn off. The open command, however, does not seem to be recognized as a turn-on command.
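To advertise a blind this way from your fulfillment, the SYNC response could look roughly like the sketch below (using the actions-on-google smart home library; the agent user id, device id, and device name are placeholders):

```typescript
// sync.ts -- declaring a blind with the OnOff and Brightness traits
import { smarthome } from 'actions-on-google';

const app = smarthome();

app.onSync((body) => ({
  requestId: body.requestId,
  payload: {
    agentUserId: 'user-123', // placeholder
    devices: [
      {
        id: 'blind-1', // placeholder
        type: 'action.devices.types.BLINDS',
        traits: [
          'action.devices.traits.OnOff',
          'action.devices.traits.Brightness', // position can be expressed as a 0-100 "brightness"
        ],
        name: { name: 'Living room blind' },
        willReportState: false,
      },
    ],
  },
}));
```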
Hope this helps, and hope that Google soon releases dedicated types and traits for blind/curtain control.
EDIT: as pointed out by @robin-thoni, this is now documented: https://developers.google.com/actions/smarthome/guides/blinds
I'm setting up roughly my 10th (test/demo) Action on Google, and the simulator ONLY lets me start my brand-new app via "Talk to my test app".
How do I get to the point I can say "Talk to {my app name here}"?
IN THE PAST, after setting up all the Action details (via the 'Overview' screen) and clicking 'TEST DRAFT', it usually resolved and the simulator would start suggesting "Talk to {my app name here}".
But this time it's not happening. It's stuck on "Talk to my test app", and has been for almost 24 hours.
Does anyone know the magical incantation required to get the Actions Simulator to accept invocations using my actual app name?
I'm using Dialogflow, which was actually all set up using the Jovo framework - using 'jovo deploy' to stand up the Dialogflow agent and the Actions config. I can't see how Jovo would have anything to do with the issue here, but you never know with software! All the config in Dialogflow and Actions looks good to me.
What you've described should work, so it sounds like you should double-check for any errors.
In the Actions Console, make sure you don't have any errors indicated on the overview screen. If you do (or even if you don't), click on the Edit button for stage 2: App Information.
In there, check any error messages to make sure they don't apply to the name or pronunciation. Make sure both the name and the pronunciation are valid and accepted.
If you're working in more than one language, make sure these are set correctly for all the languages you have configured.
Test it from the Simulator link in the left navigation instead of going back through Dialogflow. They should do the same thing, but it's possible that isn't taking effect in this case.
I'm creating an agent that interacts with an API I created, Auroras.live. However, I always have trouble invoking the test version of the agent from my Google Home.
I really have to stress the "S" in Auroras, and I also have to say "dot"; otherwise Google Home interprets it (I think) as Auroras Live or Aurora.live, without the dot or the "S".
This is definitely going to be a problem for others too, as they might not know to pronounce the dot, or might forget to stress the "S", and as a result will get frustrated and not use my agent.
While filling out the app details, I tried using different invocations (such as "Talk to Auroras dot live" and "Speak to Aurora Live"), but it wouldn't let me, because I needed to use the exact title of my app.
What should I do? Should I (or can I) submit it under an easier-to-pronounce name (like "the aurora app")? Can I somehow tell Google to accept it with or without the "S" / dot? Any suggestions welcome.
This is definitely a case where you would want your invocation name to be (slightly) different from your display name. I would list "Auroras Live" as your display name and "Aurora live" as the invocation name.
As part of the testing instructions, explain the problems you're seeing to the tester and request that both invocations be allowed.
If you want to clearly associate it with the auroras.live website, you could also mention that in the testing instructions (to include the dot), but you should probably also consider linking to the site from the description and possibly from the action itself.