Google Action Development using Python - actions-on-google

I've been building Alexa skills using the Alexa Skills Kit SDK, Python, and AWS Lambda functions, but I'm new to developing Google Home apps. There are many sample projects on GitHub, but they are all in Node.js. I'm a Python developer, so I need to build a Google Home app using Python and Google Cloud Functions. In Alexa, there is a developer portal where intents are defined, the matching handler code lives in an AWS Lambda function, and the two are linked using the Skill ID and the Lambda ARN.
For example, Alexa's color picker skill sample (link below):
https://github.com/alexa/skill-sample-python-colorpicker/blob/master/lambda/py/lambda_function.py
Is there any equivalent sample code for Google Actions in Python, or anything else that could help me? It'll be highly appreciated.

There is currently no official Python library for Actions on Google, though you may find unofficial ones.
Alternatively, you can return the JSON directly instead of using a library that wraps the JSON in easier-to-read methods.
For example, a simple response in Node.js:
conv.ask(new SimpleResponse({
  speech: 'Howdy, this is GeekNum. I can tell you fun facts about almost any number, my favorite is 42. What number do you have in mind?',
  text: 'Howdy! I can tell you fun facts about almost any number. What do you have in mind?',
}));
is equivalent to the following Dialogflow webhook JSON:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Howdy! I can tell you fun facts about almost any number, like 42. What do you have in mind?",
              "displayText": "Howdy! I can tell you fun facts about almost any number, like 42. What do you have in mind?"
            }
          }
        ]
      }
    }
  }
}
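To illustrate, below is a minimal sketch of a webhook that returns that same JSON from a Python Google Cloud Function. This is not an official library, just hand-built JSON: the function name webhook is my own choice, and the request parsing assumes Dialogflow's v2 webhook format.

import json

def webhook(request):
    """HTTP Cloud Function acting as a Dialogflow fulfillment webhook."""
    body = request.get_json(silent=True) or {}
    # The matched intent name is available here if you need to branch on it.
    intent = body.get("queryResult", {}).get("intent", {}).get("displayName")

    message = ("Howdy! I can tell you fun facts about almost any number, "
               "like 42. What do you have in mind?")

    # Same payload as the JSON above: one simple response, mic kept open.
    response = {
        "payload": {
            "google": {
                "expectUserResponse": True,
                "richResponse": {
                    "items": [{
                        "simpleResponse": {
                            "textToSpeech": message,
                            "displayText": message,
                        }
                    }]
                }
            }
        }
    }
    return json.dumps(response), 200, {"Content-Type": "application/json"}

You would deploy this as an HTTP Cloud Function and point Dialogflow's fulfillment URL at it.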

Related

What are Visual Studio Code experiments?

Today I was surprised to find an "Enable Experiments" option under VSCode's Workbench settings, turned on by default.
The setting's description is "Fetches experiments to run from a Microsoft online service", which seems rather vague to me. I tried googling this but didn't find any clear answers.
So, does anybody know what those "experiments" are, and whether it would be better to turn this off?
This is one of the cases where using open-source software pays off: since the source code of Visual Studio Code is published at https://github.com/Microsoft/vscode, we can search it to find where the setting is used.
First, search for the string Enable Experiments to see which action the option is tied to. From there, I found that the file src/vs/workbench/contrib/experiments/node/experimentService.ts uses it, specifically when trying to load an experiment in line 173:
if (!product.experimentsUrl || this.configurationService.getValue('workbench.enableExperiments') === false) {
We can see that the code checks for an experiments URL, which can be found in product.json, as @Joey mentioned in the comments. In my case, it looks like this:
"experimentsUrl": "https://az764295.vo.msecnd.net/experiments/vscode-experiments.json",
From there, we can see the content of the JSON file by making a GET request to that URL. It returned this (at least at the time I made the request):
{
  "experiments": [
    {
      "id": "cdias.searchForAzure",
      "enabled": true,
      "action": {
        "type": "ExtensionSearchResults",
        "properties": {
          "searchText": "azure",
          "preferredResults": [
            "ms-vscode.vscode-node-azure-pack",
            "ms-azuretools.vscode-azureappservice",
            "ms-azuretools.vscode-azurestorage",
            "ms-azuretools.vscode-cosmosdb"
          ]
        }
      }
    }
  ]
}
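If you want to check what is being served right now, a quick sketch using Python's requests library is enough (the URL below is the experimentsUrl from my product.json; substitute your own):

import requests

# Fetch the experiments manifest that VS Code would download.
# This URL comes from the experimentsUrl field in product.json.
url = "https://az764295.vo.msecnd.net/experiments/vscode-experiments.json"
response = requests.get(url, timeout=10)
response.raise_for_status()
print(response.json())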
Based on the response, I can see that it tries to alter my search results if I search using the "azure" keyword. I tried it, and those four extensions do show up at the top of the search results.
As to whether to disable it or not: to be on the safe side (if you don't want it to alter your experience using VS Code), you may want to disable it. But I don't think Microsoft would do anything crazy with it.
I just noticed this one and was curious about it as well. A search through the VS Code release notes finds one reference to it in July 2018. workbench.enableExperiments is listed as one of the settings for VS Code's "Offline mode": https://code.visualstudio.com/updates/v1_26#_offline-mode
The description of offline mode suggests that this setting is for "A/B experiments":
To support this offline mode, we have added new settings to turn off features such as automatic extension update checking, querying settings for A/B experiments, and fetching of online data for auto-completions.
As mentioned by others, the source code for VS Code shows this setting being used in experimentService.ts: https://github.com/microsoft/vscode/blob/93bb67d7efb669b4d1a7e40cd299bfefe5e85574/src/vs/workbench/contrib/experiments/common/experimentService.ts
If you look at the code of experimentService.ts, the stuff it's fetching seems to be related to extension recommendations, notifications about new features, and similar things. So it looks like the experiment service is for fetching data to do A/B testing of feature and extension recommendations to users.

google actions: reprompts not showing

I am trying to get the system to reprompt if the user is silent or has not entered any response. This is using the Actions SDK.
As per the documentation (https://developers.google.com/actions/assistant/reprompts), I set the conversation object in the JSON as:
"inDialogIntents": [
{
"name": "actions.intent.NO_INPUT"
}
]
Then, in the functions code I have the following:
app.intent('no_input', conv => {
  conv.ask('Hello');
});
Yet there has been no response even after waiting for a few minutes. I even tried
app.intent(actions.intent.NO_INPUT, conv => {
  conv.ask('Hello');
});
but the code has not been called. Can someone share what needs to be done to get this working? Thanks.
Here's a more detailed version of my comment:
First of all, smartphones DO NOT have no-input support: they close the mic automatically when the user doesn't say anything, and they make it visually clear. So if you're testing on a smartphone, that's the reason you're not seeing your reprompts.
As for testing no-input prompts, it can be rather hard to do on a Google Home; maybe you don't have access to one, or you don't want to wait awkwardly staring at your device. For these cases there is a "No Input" button in the Simulator.
You can use this button to simulate a no-input prompt. If that still doesn't solve your problem, then you can assume there's something wrong with your code.

restful simple web api, how to make this project? [duplicate]

This question already has answers here:
What Scala web-frameworks are available? [closed]
(18 answers)
Closed 5 years ago.
I am trying to create a web app in React.js and Scala. I have a Scala program which just prints a random name with the current time to stdout as a JSON object. It looks something like this:
{ Name : Ash TimeLastActive: 14:24:06:6456}
{ Name : Kum TimeLastActive: 15:44:06:6456} ...
The first thing is, I want this println output to become the response of the web API that my React app consumes. How can I do that in Scala?
My React web app would be very simple: it would have a start button which runs the Scala program, and whatever the program outputs on stdout should come to my web page and display there until I press the stop button. How should I handle this on the front-end side?
Can you tell me which technologies I need to be familiar with to make this project happen, and at which point I need each one?
I recommend you have a look at Scalatra (http://scalatra.org/), which will act as your API endpoint so that your web app can make requests to it. Beyond that, you need:
create-react-app to help you create a React project
axios or bluebird to make requests to your API (the Play 2 framework is a bit overkill; again, you should do some more research)
An example endpoint you could try: send a GET request to /random (this is where you run your Scala program and produce the JSON), then return the stuff you want to display as JSON for React to process (with HTTP status code 200):
[
  {
    "Name": "John Doe",
    "timestamp": "some timestamp"
  },
  ...
]
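For illustration, here is a minimal sketch of such a /random endpoint. I've written it in Python with Flask rather than Scalatra just to show the shape of the endpoint; the random name and timestamp stand in for the output of your Scala program:

import random
from datetime import datetime

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/random")
def random_name():
    # Stand-in for running the Scala program: generate a name and timestamp.
    name = random.choice(["Ash", "Kum", "John Doe"])
    payload = [{"Name": name,
                "timestamp": datetime.now().strftime("%H:%M:%S:%f")}]
    # jsonify renders the list as JSON with HTTP status 200 for React.
    return jsonify(payload)

if __name__ == "__main__":
    app.run(port=8080)

A Scalatra route would look analogous: match GET /random, run your program, and render the list as JSON with status 200.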

Provide REST API from Ionic App

Is it possible to provide a REST API from an Ionic app?
I tried to install Express to receive REST calls, but no luck so far.
The background is that I want to call a method of one Ionic 2 app from another Ionic 2 app.
I looked around for hours but couldn't find a way to do such a thing. I know this is not the common way, but it's necessary in my case because it should replace push notifications in a quick-and-dirty way (due to missing developer accounts, and it's just for demonstration purposes).
Have you looked into the Ionic Native httpd plugin? It may provide what you are looking for with a few tweaks.
Usage example straight from their docs:
import { Httpd, HttpdOptions } from '@ionic-native/httpd';

constructor(private httpd: Httpd) { }

...

let options: HttpdOptions = {
  www_root: 'httpd_root', // relative path to app's www directory
  port: 80,
  localhost_only: false
};

this.httpd.startServer(options).subscribe((data) => {
  console.log('Server is live');
});

IBM Watson visual recognition - invalid API key

I'm trying to use the visual recognition from IBM Watson using their API.
Here is the POST request I am sending:
https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify?api_key={MY_API_KEY}&version=2016-05-20, and I specify my image in the request body.
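For reference, here is the same call sketched in Python with the requests library (MY_API_KEY and image.jpg are placeholders; I'm sending the image as the images_file form part, as in the v3 docs):

import requests

# Sketch of the same classify call using Python's requests library.
url = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify"
params = {"api_key": "MY_API_KEY", "version": "2016-05-20"}

with open("image.jpg", "rb") as image:
    response = requests.post(url, params=params,
                             files={"images_file": image})

print(response.status_code, response.json())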
I always get:
{
  "status": "ERROR",
  "statusInfo": "invalid-api-key"
}
I got my key from Bluemix 3 hours ago (they said the key should be working within 5 minutes).
Any ideas? Thanks.
EDIT
Since this morning, I have been getting a different error:
{
  "status": "ERROR",
  "statusInfo": "invalid-api-key-permissions"
}
Is this me, or is Watson still under maintenance?
The Visual Recognition service was experiencing problems recognizing keys; the development team has resolved the problem as of July 14. (There is additional discussion of this issue on the IBM developerWorks Answers forum, and you can open a support ticket on a specific key issue at Bluemix support).