How do I move from one level to another in the Google Actions Trivia game?

I've made Google Action using Trivia template. Try it by saying
Ok Google, Talk to LCDP Trivia Challenge
In this game, once you finish playing one level, the template asks whether you want to play the same level again. Instead, I want the user to try a new level. (For example, once a user is done playing the Easy level, I would like to ask whether they want to play the Medium or Hard level, rather than replaying Easy.)
At the moment, the template only lets the user play the same level again and again. But I think a user who has scored well on one quiz would like to try the next level.
So, how can I suggest a different difficulty once the game is completed, instead of a Yes/No prompt for replaying the same level? Can I customize this trivia template?

There is no way to further customize the template. If you want more flexibility, you will need to implement your own Action.
All available customizations are listed in the configuration parameters section of the documentation.

Related

Create custom Google Smart Home Action

I have a Google Nest Hub Max and I want to increase its capabilities for a custom need:
"Hey Google, add xyz to my work planning"
Then I want to make an HTTP call to my private server
The private server returns a text
The text is displayed on the Google Nest Hub Max screen and spoken aloud.
How can that be achieved?
Originally I thought this would not be difficult. I imagined a Node.js, Java, Python, or whatever framework where Google gives me the xyz text, I do my thing, and I return a simple text. And obviously, Google would handle the intent matching and only call my custom code when users say the precise phrase.
I've tried to search for how to do this online, but there is a lot of documentation everywhere. This post sums up the situation quite well, but I've never found a tutorial or hello-world example of such a thing.
Does anyone know how to do it?
For steps 2 and 3, I don't necessarily need to use a private server, if I can achieve what the private server does inside the Smart Home Action code itself, mostly with some basic Python.
First - you're on the right track! There are a few assumptions and terminology issues in your question that we need to clear up, but your idea is fundamentally sound:
Google uses the term "Smart Home Actions" to describe controlling IoT/smart home devices such as lights, appliances, outlets, etc. Making something that you control through the Assistant, including Smart Speakers and Smart Hubs, means building a Conversational Action.
Most Conversational Actions need to be invoked by name. So you would start your Action with something like "Talk to Work Planning" or "Ask Work Planning to add XYZ". There are a limited, but growing, number of built-in intents (BIIs) to cover other verticals - but don't count on them right now.
All Actions are public. They all share an invocation name namespace and anyone can access them. You can add Account Linking or other ways to ensure a limited audience, and there are ways to have more private alpha and beta testing, but there are issues with both. (Consider this an opportunity!)
You're correct that Google will help you with parsing the Intent and getting the parameter values (the XYZ in your example) and then handing this over to your server. However, the server must be at a publicly accessible address with an HTTPS endpoint. (Google refers to this as a webhook.)
There are a number of resources available, via Google, StackOverflow, and elsewhere:
On StackOverflow, look for the actions-on-google tag. Frequently, conversational actions are built with either dialogflow-es or, more recently, actions-builder, each of which has its own tag. (And when you post your own questions, don't forget to provide code, errors, screenshots, and as much other information as you can to help us help you overcome the issues.)
Google's documentation about how to design and build conversational actions.
Google also has codelabs and sample code illustrating how to build conversational actions. The codelabs include the "hello world" examples you are probably looking for.
Most sample code uses JavaScript with node.js, since Google provides a library for it. If you want to use Python, you'll need to work with the JSON format that the Assistant sends to your webhook and expects back in response (see the sketch after this list).
There are articles and videos written about it. For example, this series of blog posts discussing designing and developing actions outlines the steps and shows the code. And this YouTube playlist takes you through the process step-by-step (and there are other videos covering other details if you want more).
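To make the webhook idea concrete, here is a minimal hello-world-style sketch in Python, assuming Flask and the Dialogflow ES fulfillment JSON format; the route path and the add_task intent name are invented for this example:

```python
# Minimal Dialogflow ES fulfillment webhook sketch (assumes Flask is installed).
# The Assistant requires the webhook to be served over HTTPS at a public URL.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/fulfillment", methods=["POST"])  # the path is an arbitrary choice
def fulfillment():
    body = request.get_json()
    # Dialogflow ES puts the matched intent and parameters under queryResult.
    intent = body["queryResult"]["intent"]["displayName"]
    params = body["queryResult"].get("parameters", {})

    if intent == "add_task":  # hypothetical intent name for this example
        task = params.get("task", "something")
        reply = f"Okay, I added {task} to your work planning."
    else:
        reply = "Sorry, I didn't catch that."

    # fulfillmentText is displayed on-screen and spoken aloud by the Assistant.
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```

Deploy something like this behind a publicly reachable HTTPS URL, point your fulfillment at it, and the Assistant will speak whatever comes back in fulfillmentText.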

PageViews for Google Analytics Plugin for Unity

I'm learning about Google Analytics for Unity and also learning about Google Analytics in general. For some games, it would be really useful to have page views:
Imagine your game has 20 levels. You want to track what level people get to before they quit because that correlates to how engaged they were and how fun the game is.
In Google Analytics, the Audience Overview report already has a Pages / Session metric. If you could define each level in a game as a page, then Pages / Session would give you a lot of useful information.
Unfortunately, I don't see a way to set pages in the reference documentation. Does anyone know how I could do this? Or is there an easy way to make something equivalent with a custom metric/dimension?
To summarize, there are two different answers that would help me and I'd accept either:
A way to use this plugin to define page views
A way to use this plugin to give me something equivalent to Pages / Session (i.e., Levels / Session). But, I'd like an answer for this to include how to view the Levels / Session, not just collect the data.
I figured this out. The mistake I made was creating a GA view of type "Website." I should have created one of type "App." The difference is explained here: https://support.google.com/analytics/answer/2649553#WebVersusAppViews
The plugin has the ability to send ScreenNames, which are effectively page views. But unless the view is set up as type "App," GA won't show any reports with the ScreenNames.
So, it was a matter of creating a new view, then sending ScreenNames as described here: https://developers.google.com/analytics/devguides/collection/unity/v4/reference#screen-basic
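For context on what the plugin is doing: a screen view ultimately boils down to a single Measurement Protocol hit, and the plugin builds the same kind of hit from C#. A rough Python illustration of that raw hit (the tracking ID, client ID, and names are placeholders):

```python
# Illustration only: the raw (v1) Measurement Protocol hit that a screen view
# boils down to. The Unity plugin sends the same kind of hit from C#.
# Assumes the requests package; all IDs and names below are placeholders.
import requests

payload = {
    "v": "1",                # protocol version
    "tid": "UA-XXXXXXXX-1",  # tracking ID of your app property (placeholder)
    "cid": "555",            # anonymous client ID
    "t": "screenview",       # hit type
    "an": "MyGame",          # application name (required for app hits)
    "cd": "Level 3",         # screen name -- the "page view" equivalent
}
requests.post("https://www.google-analytics.com/collect", data=payload)
```

Once the view is of type "App," these screenview hits show up in the screen reports, giving you the Levels / Session equivalent of Pages / Session.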

"Okay Google, show pictures of [PARAMETER PHRASE]"

I'm creating a setup with a Google Assistant/Home that should IDEALLY respond to the phrase "Okay Google, show pictures of [PARAMETER PHRASE]" by giving me the parameter phrase. It also HAS to be able to function like a regular Home ("Hey Google, how far away is the moon", "... tell me a joke", etc.) without me having to reimplement all of that functionality (unmatched phrases should fall back to the Google Home).
If I use the Home, I'm afraid I won't be able to avoid "... tell [MY APP NAME] to ...", but it has a great mic and speaker built in.
Alternatively, I am looking into a Raspberry Pi solution for the added layer of control. And importantly, I absolutely don't want to recreate the core Google Home features (could I perhaps pass uncaught phrases off to the Google Home backend?).
I can mask some non-parameterized commands with Assistant Shortcuts ("Okay Google, cat time!", "Hey Google, show me cats") to simplify the call phrase, but that does not work here because shortcuts aren't parameterizable.
TL;DR: I have a setup that needs to 1. work like a normal Google Home, but must 2. have additional functionality that I implement. I would like to 3. avoid having to say "... tell MY TARGET APP to [...]", but I need 4. parameters to be passed to my code, even if completely unparsed.
What are my options?
There are a bunch of possible approaches here, depending on the exact angle you want to tackle this from. None are really perfect at this time, but since everything is evolving, we'll see what develops.
It sounds like you're making an IoT picture frame or something like that? And you want to be able to talk to it? If so, you may want to look into the Assistant SDK, which lets you embed the Assistant into your IoT device. This would let you implement some voice commands yourself, but pass other things off to the Assistant to handle.
But this isn't a perfect solution, since it splits where the voice recognition happens from where it is applied, and it may not get you hotword triggering.
It is also still in an early Developer Preview, so things might change, and it may evolve to be something closer to what you want... but it is difficult to tell right now.
Depending on the IoT appliance you're working on, you may be able to leverage the built-in commands by building a Smart Home Action. However, at the moment, these have a fairly limited set of appliance types they can work with. It also sounds like you're trying to deal with media control - which isn't something that Smart Home directly works with, and is (hopefully) a future Action API (there were some hints about this at I/O, with Cast compatibility promised... but no details).
If you really want to build for the Home and Assistant, you'll need to work within the limitations of Actions on Google. And that does include some issues with the triggering name.
However... one good strategy is to pick a name that works well with the prefix phrases that are used. Since "Ask" is a legitimate prefix that Home handles, you could plan for a triggering name such as "awesome photo frame", and make the command "Ask awesome photo frame to show pictures of something".
More risky, since it isn't clearly documented, but it seems that some triggering names work without a prefix at all. So if your application is named "fly to the moon", it seems like you can say "Hey Google, fly to the moon" and the action will be triggered. If you can get a name like this registered, it will feel very natural for the user.
Finally, you can pick a reasonable name, but have your users set an alias or shortcut that makes sense to them. I'm not sure how this would fit in with solution (1), but being able to predefine shortcuts would make it pretty powerful.
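Whichever invocation strategy you choose, the parameter phrase itself reaches your fulfillment as JSON. As a small sketch of point (4) in the TL;DR, assuming the Dialogflow ES request format and a hypothetical parameter named query (mapped, say, to @sys.any so the phrase arrives completely unparsed):

```python
# Sketch: pulling the free-form phrase out of a Dialogflow ES webhook request.
# "query" is a hypothetical parameter name you would define on your intent.
def extract_search_phrase(webhook_body: dict) -> str:
    query_result = webhook_body.get("queryResult", {})
    return query_result.get("parameters", {}).get("query", "")

# A trimmed-down request body, shaped like what Dialogflow would send:
body = {"queryResult": {"queryText": "show pictures of red pandas",
                        "parameters": {"query": "red pandas"}}}
assert extract_search_phrase(body) == "red pandas"
```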
You can't invoke your app without first connecting to it using "Ok Google, talk to my app", because otherwise you would be talking to the core Assistant, not your app.
Google doesn't allow talking to an app without an explicit app invocation.

Making User Interface smart in eclipse based applications

I am currently developing a desktop application based on Eclipse.
Currently the user needs to perform many redundant actions, like doing step A in View 1, then step B in View 2, then repeating. I am wondering if anybody knows of a solution that records/recommends user actions in Eclipse-based applications.
Maybe something based on history, much like web-based solutions.
Any help would be good.
Thanks.
1)
Do you want to record the user's clicks (actions)?
If so, Eclipse provides a location tracker, so you can analyse use cases from the field.
OperationHistoryActionHandler
2)
Do you want a smarter way for the user to use your tool?
Think about using wizards. In a wizard you can have a defined number of execution steps, so the user does not need to search for some button in a view.
With a wizard, a specific execution flow is very clean and easy to understand.
3)
As Jonah mentioned, you can use cheat sheets as well.
We once did something similar, where we had a rather big user interface with heaps and heaps of different functionalities. Our solution was this:
We abstracted all actions into commands. They were all implemented in a way that they could be cascaded, undone, redone, etc. See, for example, IUndoableOperation.
The commands had conditions that made it easy to decide if one could combine these commands.
All commands have an ID and can be easily identified
We then went on to integrate our own run configurations. We added a UI that gave the user the option to cascade multiple commands into one big one. For example, a user who wanted to create a new file, apply a template, generate some graphs, and export them to a given location would create a run configuration adding those commands together.
That way we kept the UI comprehensive but gave the expert user the ability to create their own workflow based on what they do every day.
Our users liked that quite a bit.
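The pattern itself is language-agnostic. As a rough sketch of the cascading idea (written in Python purely for brevity; in Eclipse you would implement IUndoableOperation instead, and the command names are hypothetical):

```python
# Rough sketch of cascadable, undoable commands -- the idea that Eclipse's
# IUndoableOperation formalizes, shown here in Python for brevity.
class Command:
    id: str

    def execute(self) -> None: ...
    def undo(self) -> None: ...

class CompositeCommand(Command):
    """Cascades several commands into one big one, like a run configuration."""

    def __init__(self, id: str, commands: list):
        self.id = id
        self.commands = commands

    def execute(self) -> None:
        for cmd in self.commands:
            cmd.execute()

    def undo(self) -> None:
        # Undo in reverse order so later commands are rolled back first.
        for cmd in reversed(self.commands):
            cmd.undo()

# e.g. CompositeCommand("export-graphs",
#          [CreateFile(), ApplyTemplate(), GenerateGraphs()])
# (CreateFile etc. are hypothetical concrete commands.)
```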

Creating text based RPG game for iPhone

I would like to create a text-based RPG game for iPhone. In this game, the user will be shown questions with answers, and will navigate to the next page depending on the answer they select. As per the requirements, I have to store the entire story in the app itself, and I need to access each event in the story when the user selects it. So what kind of database should I use?
Please share your ideas.
Thanks.
I'm not so sure you need to create some type of off-device database. If the game has predefined questions and answers, then you should probably ship the app with all of them on the device.
I'm sure there are plenty of ways to go about this, the simplest being if/else statements. This would obviously be a hindrance and easy to mess up if the storyline is long.
One thing that comes to mind would be to design the story using a tree representation. The starting point would be the root of the tree, and from there the user would progress down the tree until they eventually reach a leaf (an ending). Depending on the number of options the user has on each page, this really wouldn't be too tough to implement if you have any knowledge of this type of data structure.
But this is just off the top of my head. I would definitely consider using a tree data structure, though.
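To illustrate the tree idea (in Python for brevity; on iOS the same structure could be built from a bundled plist or JSON file rather than a database):

```python
# Sketch of a branching story as a tree. Each node is one "page"; its
# children are the pages reachable through the listed answers.
class StoryNode:
    def __init__(self, text, choices=None):
        self.text = text              # the page's question or narration
        self.choices = choices or {}  # maps answer text -> next StoryNode

    def is_ending(self):
        return not self.choices       # a leaf node is an ending

# A tiny example story:
good_end = StoryNode("You find the treasure. The end.")
bad_end = StoryNode("The bridge collapses. The end.")
root = StoryNode("You reach a river. What do you do?",
                 {"Cross the bridge": bad_end,
                  "Follow the bank": good_end})

# Advancing the story is just a dictionary lookup on the chosen answer:
page = root.choices["Follow the bank"]
print(page.text)  # -> You find the treasure. The end.
```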