Every time I try to publish my Action on Google for alpha release, the request is denied with this unclear error message:
For en: Your sample invocations are structured incorrectly. Make sure they all start with a trigger phrase, include either your app name or pronunciation, and successfully invoke your app.
My trigger phrases all pass the invocation page rules, and everything works fine in the simulator. This is my invocation phrase:
Talk to Doctor Electronics
Does anyone have a clue?
It sounds like this refers to the sample invocations configuration, which is on the Directory information page of the console for your Action, under the Details section. Make sure each phrase is correctly structured - starting with a trigger phrase and including your app name or pronunciation, for example "Talk to Doctor Electronics" - and makes sense for your Action.
I want to be able to talk with Google Assistant but connect the Actions project directly to an NLP service I already have running on my server. In other words, NOT use Dialogflow.
All the following examples show how to do this.
With Rasa
https://blog.rasa.com/going-beyond-hey-google-building-a-rasa-powered-google-assistant/
With LUIS
https://www.grokkingandroid.com/using-the-actions-sdk/
https://dzone.com/articles/using-the-actions-sdk-for-google-assistant-develop
With Watson
https://www.youtube.com/watch?v=no0R0bSkHXc
They all use actions.intent.MAIN as the invocation and actions.intent.TEXT for all other utterances from the user.
This is what I need. I don't want to create a load of intents with utterance phrases inside the Action, because I just want all phrases spoken by the user to be passed to my server so my NLP service can deal with them.
So I set up a new Actions project, installed the Actions CLI, and then spent three days trying all possible combinations without success, because all these examples use gactions CLI 2.1.3 and Google has now moved on to gactions CLI 3.1.0.
Not only have the commands changed, but so too have the file formats and structure.
It appears there is also a new Google Actions Console, and actions.intent.TEXT is no longer available.
My Action is connected to my server via webhook, but I cannot figure out how to get actions.intent.TEXT included and working.
Everything I find, even here:
Publishing Actions on google without Dialogflow
is from before the version update and follows the same pattern.
Can anyone point to an up-to-date (v3.1.0) discussion, tutorial, or example of how to send all user phrases through to an NLP service that isn't Dialogflow, or has Google closed that avenue?
Is it possible to somehow go back and use the 2.1 CLI, either with the new console or by reverting the console? (I have both CLI versions; I can see how different their commands are.)
Is it possible to go back and use 2.1?
There is no way to go back to AoG 2, and you probably don't want to anyway - the newer features are only available with v3.
Can I use my own NLP with v3?
Yes, although it isn't as obvious, and there are some changes in semantics.
As an overview, what you'll need to do is:
Create a Type that can accept "Free form text". I usually call this type "Any".
Create a Custom Intent that has a single parameter of this "Any" Type and at least one training phrase that captures everything into this parameter. (Add one training phrase, highlight the entire phrase, and assign it to the parameter. Sometimes I also add phrases that include words I don't want captured.) I usually call the Intent "matchAny" and the parameter "any".
Finally, you'll have a Scene that you transition to from the Main invocation. When it matches the "matchAny" Intent, it should call your webhook with a handler name. Your webhook will be called with the "any" parameter set to the user's utterance. (Note that the JSON format has also changed.)
That seems like a lot of work. Isn't there just some way to do all that from the command line?
Yes. You can do all of that in the configuration files that the CLI accesses and then upload it. (You can then also use the console to review the configuration, if necessary, to make sure they're configured as you expect. You can shift back and forth between them as appropriate.)
Google also has a GitHub repository that contains most of the files pre-configured for this sort of setup.
You will need to update the configuration from the repository to handle the webhook correctly (it includes code to illustrate what is happening using the inline code editor) and to add your project ID.
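On the fulfillment side, here is a minimal sketch of what such a webhook could look like with the @assistant/conversation Node.js library, assuming the Intent is named "matchAny", the parameter "any", and the webhook handler "matchAny" as above. The "/fulfillment" path and the call out to your own NLP service are placeholders:

const express = require('express');
const { conversation } = require('@assistant/conversation');

const app = conversation();

// The handler name must match the one configured in the Scene.
app.handle('matchAny', (conv) => {
  // The full user utterance, captured by the "any" parameter.
  const utterance = conv.intent.params.any.resolved;
  // Forward the utterance to your own NLP service here, e.g.:
  // const reply = await myNlpService.process(utterance);  // hypothetical helper
  conv.add(`You said: ${utterance}`);
});

// Standard Express wiring; Assistant POSTs every request to this one path.
express()
  .use(express.json())
  .post('/fulfillment', app)
  .listen(process.env.PORT || 3000);

The point of this design is that there is a single catch-all Intent, so the webhook sees every utterance and your own NLP service does all the interpretation.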
I deployed the production release of my first Google Action. But even two weeks after the approved deployment, I can find my Action neither in the Google Assistant store nor directly via voice invocation.
The test versions ran fine and were found every single time, so the invocation name should not be the problem. When I use the generated Action link from the console, I find the Action in the store and can send it to my smartphone, where I can start it via the link. But after closing the app, I cannot open it again by voice.
I used two different Google accounts on different devices (all set up in the correct language, German), but no luck.
Is this a mistake on Google's side, or am I missing something? In this state I have to open the Action every time via the Action link, which is useless for a voice app :)
Here is the link to my action: https://assistant.google.com/services/invoke/uid/000000c77f740137?hl=de
The invocation would be, for example: "Mit Erfolgs-Fans sprechen" ("Talk to Erfolgs-Fans"). As said, in testing this name was found every single time.
This happened to me too, though my Action wasn't in production when I faced it. I was also unable to see the Action on Assistant-enabled devices (Google Home app/Assistant).
It is probably not an invocation name issue; the Action is just not being made visible across all platforms. Some of the solutions I tried were:
Clear the device's cache.
Create a new Action with everything exactly the same as in your current Action. This worked once for me!
Go to GCP and check whether there is any pending activity under your project.
At one point I had created many Actions because I couldn't see them, and after around 30 days they all started to become visible. So if you can wait, that is fine.
In the end, do contact their support with all the relevant information; they should be able to help.
Thank you!
I'm setting up about my 10th (test/demo) Action on Google, and the simulator ONLY lets me start my brand-new app via "Talk to my test app".
How do I get to the point I can say "Talk to {my app name here}"?
IN THE PAST, after setting up all the Action details (via the 'Overview' screen) and clicking 'TEST DRAFT', this usually resolved itself and the simulator started suggesting "Talk to {my app name here}".
But this time it's not happening. It's stuck on "Talk to my test app", and has been for almost 24 hours.
Does anyone know the magical incantation required to get the Actions Simulator to accept invocations using my actual app name?
I'm using Dialogflow, which was all set up using the Jovo framework - using 'jovo deploy' to stand up the Dialogflow agent and Actions config. I can't see that Jovo would have anything to do with the issue here, but you never know with software! All the config in Dialogflow and Actions looks good to me.
What you've described should work, but you should double-check for any errors.
In the Actions Console, make sure you don't have any errors indicated on the overview screen. If you do (or even if you don't), click on the Edit button for stage 2: App Information.
In there, check that the error messages don't apply to the name or pronunciation, and make sure both the name and pronunciation are valid and accepted.
If you're working in more than one language, make sure these are set correctly for every language you have configured.
Test it from the Simulator link in the left navigation instead of going back through Dialogflow. They should do the same thing, but it's possible the change isn't taking effect in this case.
I get an error message when I try to publish my Action and submit it for review. It says a query pattern is missing on the Default Welcome Intent, but my query pattern is not missing at all. Can someone please explain why I get this error and how to fix it so I can submit my Action? I have submitted my question to tech support and, of course, nobody has ever responded to my message.
Thank you for any help that you can provide.
I was getting this error because I had added two languages (English and Hindi) and hadn't added any training phrases for Hindi. So either add training phrases for all languages or remove the languages you don't need from the intents. This solved the problem for me.
I got this error because I had an intent in the "implicit invocation" list that was triggered by an event from the fulfillment server.
The implicit invocation list should only contain deep-linkable intents such as "OK Google, ask Personal Chef for a hot soup recipe" - not callbacks or returned events without a spoken trigger.
You can edit the implicit invocations under Integrations > Google Assistant.
I have figured out what the problem is and how to fix it.
The problem occurs when, at some point, you tried to submit the project before entering the query patterns and the error appeared. After adding the query patterns, you went back to the Actions Console to submit the project, but it showed the same error again.
The issue is that the Actions Console was not updated with the Dialogflow changes. To update it, go to your Dialogflow project settings and open 'Environments'. There, create an environment and publish it. Then head back to the Actions Console, where you will be able to submit your Action.
Learn more about environments here.
I want to deploy this example on Glitch. I've added package.json and index.js to my Glitch project, and it builds successfully.
However, the code seems to be missing a section that listens for HTTPS requests. In most Node.js/Express web apps, there is code that indicates which paths trigger which functions, but this is missing from the example. Can you explain how it should work and why that part is missing from this example?
It's not clear what you mean by "the code is missing a section to listen", as the main purpose of index.js is to listen for requests and return information.
I suggest you check index.js and make sure that requests are reaching your endpoint on Glitch.
Also, it would be helpful if you could share your Glitch project here on SO so we can see what you are doing.
By the way, you might want to double-check that you have all the packages installed.
I also created a simple example on Glitch that returns the current Bitcoin price. Feel free to remix it and use the code there for your own Action.
Good luck!
The part that "listens to requests" is:
// The entry point to all our actions.
// `assistant` is the actions-on-google client instance created elsewhere in
// index.js from the incoming request/response pair.
const actionMap = new Map();
actionMap.set(ACTION_PRICE, priceHandler);
actionMap.set(ACTION_TOTAL, totalHandler);
actionMap.set(ACTION_BLOCK, blockCountHandler);
actionMap.set(ACTION_MARKET, marketCaptHandler);
actionMap.set(ACTION_INTERVAL, intervalHandler);
// Dispatches the incoming request to the handler mapped to its action.
assistant.handleRequest(actionMap);
where each ACTION_* constant is an action name (set in an intent) in Dialogflow, and the handler is the corresponding function in your code.
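The HTTP listening itself is done by Express (or by whatever platform hosts index.js). As a rough sketch of the glue you'd expect around the code above - assuming the v1 actions-on-google library that this example uses, and assuming "/" as the path (check index.js for the actual route):

const express = require('express');
const bodyParser = require('body-parser');
// ApiAiAssistant is the Dialogflow (API.AI) client in the v1 actions-on-google library.
const { ApiAiAssistant } = require('actions-on-google');

const server = express();
server.use(bodyParser.json());

// Dialogflow POSTs every matched intent to this single path; the SDK then
// dispatches to the right handler via the action map shown above.
server.post('/', (request, response) => {
  const assistant = new ApiAiAssistant({ request, response });
  const actionMap = new Map();
  actionMap.set(ACTION_PRICE, priceHandler); // ...and the rest, as above
  assistant.handleRequest(actionMap);
});

server.listen(process.env.PORT || 3000);

So there is only one path: every Dialogflow request hits the same endpoint, and the routing to functions happens through the action map, not through URL paths.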
I'd also recommend you take a look at
https://codelabs.developers.google.com/codelabs/assistant-codelab/index.html?index=..%2F..%2Findex#0
if you want a good example of an Assistant app, though it uses Firebase instead of Glitch.