I have an Actions on Google agent built with Dialogflow with several actions (e.g. actions.intent.MAIN and get_day_of_week).
When I created my agent 3+ months ago, I could invoke the agent in two ways:
1. With the agent's name (e.g. "Talk to My Agent"), which would launch the actions.intent.MAIN intent.
2. With the grammar specific to an action (e.g. "Ask My Agent what day of the week is it"), which would launch the get_day_of_week action.
Without changing anything, launching the agent with a custom action (#2 above) stopped working. Is there a way to debug this?
In the simulator, when I type "Ask My Agent what day of the week is it", the request and response are empty, and the dialog in the simulator says, "The agent returned an empty TTS". I'm not sure whether the request and response are empty because the simulator doesn't support launching custom actions, because Actions on Google stopped supporting launching custom actions, or because my agent broke (even though I didn't change anything). For what it's worth, this same problem happened to two distinct agents that I have.
I'm guessing there's nothing for you to debug; this appears to be a Google bug. I had the exact same thing happen to me on an action we have in production. There's no way I could have changed anything.
Here's my Reddit post, if you wanna follow.
Interestingly, the deep links don't work for me (and several others), but do work for my co-worker. And one of the commenters says deep links don't work for him unless he types them in the console. 🤷‍♂️
I'm currently migrating from Dialogflow to Actions Builder. Things have gone well so far; however, after adding custom intents to my scenes, the test simulator prompts me with the warning "Intent 'intent_name' is used as an action, but not added as a global event.", blocking my ability to test the action until I configure the intent as global.
Since configuring intents as global enables implicit invocation, it seems inappropriate to apply it to all intents, especially those that have no business being accessed implicitly.
Has anyone experienced this warning? Any tips to get past this error without configuring the intent as global?
Cheers
Additional info on scenes and deep link actions:
On enter -> Welcome intent:
Enter condition: Call 'Welcome' webhook.
User intent handling: when 'intent_name' is matched -> call webhook 'intent_name'. No transition, no web-based send prompts.
Launch the test simulator, try to enable testing, and get prompted to make 'intent_name' a global event.
Within the intent, 'No' is selected for 'is this a global event', as it's mid-conversation and not suitable for implicit/deep-linked entry. No errors/warnings are reported in the SDK for the intent.
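For reference, the scene described above corresponds to roughly the following Actions SDK YAML (the sort of file the gactions CLI pulls and pushes); the scene name is a placeholder, and field names may differ slightly between SDK versions:

```yaml
# custom/scenes/MyScene.yaml (scene name is a placeholder)
onEnter:
  webhookHandler: Welcome          # "Call 'Welcome' webhook" on enter
intentEvents:
  - intent: intent_name            # the custom intent that triggers the warning
    handler:
      webhookHandler: intent_name  # call the webhook; no transition, no static prompt
```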
Additional project info:
Initially created the project last year using the built-in migration tool; migration efforts stalled as the test simulator ran into other issues, which eventually resolved themselves (https://github.com/actions-on-google/assistant-conversation-nodejs/issues/9).
After the above blockage I had continued Dialogflow development, so a new migration was necessary due to significant changes. Rather than use the built-in migration tool, I chose to delete the previously imported intents & types and then manually import the data using the gactions CLI tool.
Perhaps it'd be easier to just use a new Google project? I don't see any misconfiguration in the intent or the scene, so perhaps the project is corrupted somehow?
My app has certain commands that seem to conflict with the Google Assistant's built-in behavior. Even though I'm in a conversation with my app and have explicitly asked for a response, the Google Assistant takes over when the user says "Read the note" or "play the tape". In the first case it pops into reading my Keep notes; in the second it launches YT Music and plays something. I want those commands to be fulfilled by my app!
I've tried training on those specific phrases through the Dialogflow console, but it didn't seem to help. Is there any way to ensure my app processes all commands while it's in a conversation? Or at least a few specific ones?
I should note that, otherwise, commands work perfectly. Even similar ones like "look at the note" work. It's those specific commands causing issues. "Play ____" seems to always launch YT Music, though. Commands like "look at the note" go through my TEXT intent and not a fallback intent.
Very new to Google Actions. Testing out tutorial stuff.
I have tried this in a couple of test projects just to double-check. After the initial run of any project, I do not get any updates on draft projects. No changes show up in draft projects for me, in either the simulator or on a real device.
Started new project
Even blank project has basic conversation telling you where to add things next.
Change text.
Notice prompt does not change in testing environment.
In the pictures below I have replaced the words "hello world" with "Hey Dude" for both the fulfillment and the console output. I would expect the testing prompt to respond with "Hey Dude from fulfillment and Hey Dude from the console!" But it does not; it does not reflect any of the recent changes.
I think there may be two slightly different (but sometimes related) issues going on here.
The first is that there are known problems with the simulator being slow to pick up on updates, or with them not seeming to show up at all. The second has to do with making sure you're deploying changes from the built-in code editor.
I don't have a clear answer to the first problem, although I know they're looking into it. I find that I can make some changes and they may not be noticed, but I know they have been picked up if I see the "Your preview is being updated..." spinner appear. There are other spinners that sometimes appear, but unless it explicitly says the preview is being updated, the changes aren't always picked up. (Sometimes they are, however.)
Usually, if I don't see this, I'll go back and force an apparent change (delete a character from a webhook handler name, then add it back) and go back to the simulator. In general, this time it will say it is updating.
If you're using the Cloud Functions editor, you need to do three things:
Save the changes. You'll do this by clicking the "Save Fulfillment" button, but this only saves it so you can leave the editor. It doesn't mean that the simulator has access to it yet.
Deploy the changes. This deploys your code to Cloud Functions so it can be run. Note in the illustration that it says the code is saved, but not yet deployed.
Wait until the changes are fully deployed. Deployment takes time, and until it has completed, the new code won't be available in the simulator. While it is deploying, the editor lets you know.
Once it has deployed, however, the message changes, and the changes should be available through the simulator (although you may still need to see the "being updated" message to be sure).
Remember, however, that you don't need to use the "Cloud Functions editor" in order to deploy a webhook. You can deploy a webhook on any web server where
The host is public (so has a public IP address that Google can reach)
It can handle HTTPS with a non-self-signed certificate
You can even deploy to Cloud Functions for Firebase yourself, which is the same service that the Actions Builder uses. This way you set the URL once in the Actions Builder and, once it is set, you won't need to change it.
But you'll still be able to change your code by managing your own deployment separately from the Actions Builder.
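If you go the self-deployment route, a minimal fulfillment might look roughly like this. This is a sketch only, assuming the @assistant/conversation library and Cloud Functions for Firebase; the handler name 'greeting' and the exported function name are placeholders, not values from any particular project:

```typescript
// index.ts - a minimal self-deployed fulfillment sketch (names are placeholders).
import { conversation } from '@assistant/conversation';
import * as functions from 'firebase-functions';

const app = conversation();

// Register one handler per webhook handler name that your scenes call;
// 'greeting' here is hypothetical.
app.handle('greeting', (conv) => {
  // This prompt is combined with any static prompt configured in the console.
  conv.add('Hey Dude from fulfillment');
});

// The exported name becomes the Cloud Function's name (and part of its URL),
// e.g. https://<region>-<project-id>.cloudfunctions.net/fulfillment
export const fulfillment = functions.https.onRequest(app);
```

You deploy it with the Firebase CLI (firebase deploy) and paste the resulting HTTPS URL into the webhook settings in the Actions Builder; after that, redeploying new code doesn't require touching the console.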
First time here (please be gentle). I've built and had an action approved. For the life of me, I cannot work out how to run it.
[screenshot: Published]
In simulation mode I got a test run to work on my Google Home. However, my invocation doesn't run it, and I can't seem to find a portal to download it or similar.
[screenshot: invocation]
How do you people use it now that it has been approved? I've checked my directory but there is no data.
[screenshot: directory]
You will be able to use the same invocation phrases that you used for testing in the simulator.
Although it is "published", it does take time to be distributed to all of Google's servers and for it to be available on everyone's Assistant. There isn't any way you can rush this process - it usually takes a couple of days.
You can look at your Action Console, click on "Analytics" on the left, and then the "Directory" tab on top to see how it appears in the directory.
If you view the directory URL from a mobile device, you can also invoke the Action from the directory entry itself once it is available.
You will never "download" or "install" it on a Google Home. Think of it as a website and your Google Home as a browser.
We have a legacy server service running on a Windows 7 desktop that keeps crashing with a popup window reporting a memory error. The popup stops all processing on the machine. Once the "OK" button is clicked on the popup the system recovers and moves on. The root problem appears to be inside a compiled DLL that the application uses.
This popup usually happens between 9pm and 11pm every couple days.
It happens when no one is signed into the PC, so the popup displays in front of the CTRL+ALT+Delete message for signing in.
I can click OK without signing into the computer, and it continues processing.
CHALLENGE:
This is a legacy application that will be replaced when budget allows (maybe next summer), so there is no budget for an upgrade or for paying a consultant to fix the root problem.
All we need to do is click the OK button when the "Application Popup" event is thrown (logged in the Event Viewer).
I know that it would be WRONG to write a script to satisfy the popup. Fixing the root cause is the CORRECT action... but we have no support to spend money at this time. And since it's a compiled DLL, we can't fix the code.
Is there a PowerShell script that could:
Watch for a specific event ("Application Popup") and, if it occurs, simulate pressing the ENTER key?
Run in the background while signed out of a user account?
If PowerShell isn't the answer, is there a better macro or script tool to get us by?
I know it's "bad practice" but we just need to get along until we get some budget dollars.
PowerShell probably isn't the best answer in this case. I'd suggest using something like AutoIt (the WinWaitActive function would be useful in your case).
I have used AutoIt in the past and have found it very useful for Windows GUI automation.
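For example, a script along the following lines waits for the error dialog and dismisses it. This is a rough sketch only: the window title "Application Error" is a placeholder you would replace with the popup's actual title, and you would still need to confirm it can interact with a dialog shown over the sign-in screen.

```autoit
; Sketch: loop forever, waiting for the legacy app's error dialog, then dismiss it.
; "Application Error" is a placeholder title - match it to the real popup's title.
While 1
    ; Block until a window with a matching title exists and is active.
    WinWaitActive("Application Error")
    ; Press ENTER to trigger the default OK button.
    Send("{ENTER}")
    ; Short pause before watching for the next occurrence.
    Sleep(1000)
WEnd
```

You could compile this to an .exe and launch it from Task Scheduler, but keep in mind that GUI automation generally needs an interactive session, so test it against the signed-out scenario first.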