I know webkitSpeechRecognition is only available in Chromium-based browsers. However, I am wondering how it converts voice into text.
I tried monitoring the network log in the developer console of Google Chrome and I don't see any network activity. I thought it would send an API request to Google, but apparently it doesn't.
I cannot find any architectural documentation on this either.
Does anyone have any idea?
To my knowledge, there is no official documentation for the Google Speech API that is used in Chromium, but it has been "reverse-engineered" by inspecting Chromium's source code.
If you search for it, you should find multiple blogs and tutorials that describe how the REST API can be used.
A good description of how to use it can be found here:
http://blog.travispayton.com/wp-content/uploads/2014/03/Google-Speech-API.pdf
(With regard to the description in the PDF: the mentioned "Speech API V1" has since been deactivated, so only the "Full-Duplex API" can be used.)
But note that you need an API key for the Speech API via Google's Developer Console, and for that you need to be registered in the Chromium development group.
Also, with your own key, the Speech API is currently limited to 50 transactions per day.
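To give an idea of how the reverse-engineered interface is used, here is a minimal sketch in Java. The endpoint URLs, parameter names, and the audio format (16 kHz FLAC) are all taken from the unofficial write-ups mentioned above and should be treated as assumptions rather than an official contract: the "up" request streams audio while the "down" request returns JSON results, linked by a shared pair token.

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.concurrent.ThreadLocalRandom;

    // Rough sketch of the reverse-engineered full-duplex speech endpoints.
    // URLs, parameters, and audio format are assumptions based on the
    // unofficial write-ups, not official documentation.
    public class FullDuplexSpeechSketch {

        private static final String BASE = "https://www.google.com/speech-api/full-duplex/v1";

        public static void main(String[] args) throws Exception {
            String apiKey = "YOUR_API_KEY"; // key obtained via the Google Developer Console
            // Random token that links the upstream and downstream requests.
            String pair = Long.toHexString(ThreadLocalRandom.current().nextLong());

            // Downstream: read the JSON recognition results as they arrive.
            Thread down = new Thread(() -> {
                try {
                    HttpURLConnection c = (HttpURLConnection) new URL(
                            BASE + "/down?pair=" + pair).openConnection();
                    try (InputStream in = c.getInputStream()) {
                        in.transferTo(System.out);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            down.start();

            // Upstream: POST FLAC-encoded audio to the recognizer.
            HttpURLConnection up = (HttpURLConnection) new URL(
                    BASE + "/up?key=" + apiKey + "&pair=" + pair
                    + "&lang=en-US&output=json").openConnection();
            up.setRequestMethod("POST");
            up.setDoOutput(true);
            up.setRequestProperty("Content-Type", "audio/x-flac; rate=16000");
            try (OutputStream out = up.getOutputStream()) {
                out.write(Files.readAllBytes(Paths.get("sample.flac")));
            }
            System.out.println("upstream HTTP status: " + up.getResponseCode());
            down.join();
        }
    }

In practice the two requests have to run concurrently (hence the separate thread), because results only start flowing on the "down" connection while audio is still being sent on the "up" connection.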
I am interested in building my own cards (since no such card currently seems to be available?), similar to this question. However, no one has answered that question.
If not, how else can we achieve deep-linking to open other apps? (For example, I want to get directions, and I don't mind having to open Google Maps to do so.) But that only seems to work for Android, and it is still in Developer Preview?
I also want to allow the user to click on a card or a button and call a mobile number, but url only accepts http/https URL schemes and not tel://, so that workaround can't work...
You can't build your own rich response types; they are internal features of the Dialogflow platform and the Assistant apps, and you can only use them as far as Google exposes them via the APIs. You are not alone in wanting more advanced rich responses (I'd like to have free-form HTML cards), but waiting is all you can do here.
I have some experience building chat and voice agents for other platforms, but I’m not using API.AI to understand natural language and parse intents. Do I have to replace my existing solution with API.AI?
Not at all. The advantages of using API.AI in creating a Conversation Action include Natural Language Understanding and grammar expansion, form filling, intent matching, and more.
That said, the Actions on Google platform includes a CLI, a client library, and a Web Simulator, all of which can be used to develop an Action entirely independently of API.AI. To do this you'll need to build your own Action Package, which describes your Action and the expected user grammars, and an endpoint to serve the Assistant's requests and provide responses to your users' queries. The CLI can be used to deploy your Action Package directly to Google, and you can host your endpoint on any hosting service you wish. Google recommends App Engine on Google Cloud Platform.
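For a rough idea of what an Action Package looks like, here is a minimal sketch of an action.json. The field names reflect the Actions SDK format as it existed at the time and the webhook URL is a placeholder, so treat this as illustrative and check the current Actions SDK reference before relying on it.

    {
      "actions": [
        {
          "description": "Default welcome intent",
          "name": "MAIN",
          "fulfillment": { "conversationName": "my_action" },
          "intent": {
            "name": "actions.intent.MAIN",
            "trigger": { "queryPatterns": ["talk to my test app"] }
          }
        }
      ],
      "conversations": {
        "my_action": {
          "name": "my_action",
          "url": "https://example.com/assistant-webhook"
        }
      },
      "locale": "en"
    }

The package is then pushed to your project with the CLI, and the url field points at whatever endpoint you host yourself.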
I found this explanation from the official page most helpful.
API.AI
Use this option for most use cases. Understanding and parsing natural, human language is a very hard task, and API.AI does all that for you. API.AI also wraps the functionality of the Actions SDK into an easy-to-use web IDE that has conveniences such as generating and deploying action packages for you.
It also lets you build conversational experiences once and deploy them to many platforms other than Actions on Google.
ACTIONS SDK
Use this option if you have simple actions that have very short conversations with limited user input variability. These types of actions typically don't require robust language understanding and typically accomplish one quick use case.
In addition, if you already have an NLU that you want to use and just want to receive raw text and pass it to your own NLU, you will also need to use the Actions SDK.
Finally, the Actions SDK doesn't provide the modern conveniences of an IDE, so you have to manually create action packages with a text editor and deploy them to your Google Developer project with a command-line utility.
Google is aggressively pushing everybody to API.AI. The only SDK they have (Node.js) no longer supports expected events, for instance. Of course, you don't need to rely on their SDK (you can talk to the API directly), but they may change the API too. So proceed with caution.
I was following the blog below and trying to execute the POC, but with no luck. I followed all the steps as suggested; however, I could not see any report in Google Analytics after saving the content. No user is shown in the report. Please suggest what could be wrong in my implementation.
Reference Link
It is very hard to give a generic answer without looking into the configuration. I just followed the tutorial myself and it all worked fine (to test, I was making curl calls in a terminal window on my laptop and watching the Real-Time / Overview report in Google Analytics).
First and foremost, please check that _system/governance/apimgt/statistics/ga-config.xml has Enabled set to true and TrackingID set to the UA- tracking code you got from GA.
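For reference, the relevant part of that file should look roughly like the sketch below. The two element names come straight from the settings mentioned above, but the exact surrounding structure may differ in your tenant, so treat this as illustrative.

    <!-- _system/governance/apimgt/statistics/ga-config.xml (illustrative sketch) -->
    <GoogleAnalyticsTracking>
        <!-- Must be true for the gateway to publish events to GA -->
        <Enabled>true</Enabled>
        <!-- The web property ID from your Google Analytics account -->
        <TrackingID>UA-XXXXXXXX-X</TrackingID>
    </GoogleAnalyticsTracking>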
One thing to check is whether you are looking at the Real-Time report or the historical one. When you have just implemented the change, look at the Real-Time / Overview report initially, as it starts showing data much faster.
Also, since API Cloud has multiple gateway nodes, it takes time for configuration changes to propagate. So one thing to try is to wait 15 minutes or so from the time you applied the configuration changes in the cloud, then try invoking the API and see if the sessions are reflected in Google Analytics.
Finally, if these do not help, just submit a support ticket in API Cloud; support is included for free with the cloud service.
I have an internal tool written in Java. It would be useful to get a little feedback on how much it is used by colleagues.
A simple solution would be to have the application display an image which it fetches from a web-hit-counter-like application, and just look at how often the image is accessed.
So what I am looking for is a stand-alone application (i.e. no Apache modules, CGI scripts, etc.) which serves one or a couple of static images and can log accesses, preferably with as little support for everything else as possible.
Searching for "hit counter" turned up little of relevance; "lightweight http server" was more relevant, although still mostly overkill. Any suggestions?
You could try using Google Analytics. Most of the time, people using Google Analytics are tracking pageviews on a web page; Google provides some JavaScript that you place on your page, and it tracks visits to that page as well as browser capabilities, etc. Behind the scenes, that JavaScript places an image tag on the page in the manner you describe.
However, since your application is Java and not a web app (I assume it's standalone and not an applet), you won't be able to include Google's JavaScript (unless you embed a JavaScript interpreter... yuck). Fortunately, it is possible to use Google Analytics without JavaScript.
The trick is that Google's scripts use the image http://www.google-analytics.com/__utm.gif and pass parameters via the query string. You can find a list of the parameters you can pass in the query string here. So all you'd have to do is figure out what the query string should be and have your client make the request to Google's image (after setting up your Google Analytics account, of course).
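Below is a minimal sketch of that idea in Java. The parameter names (utmwv, utmn, utmhn, utmp, utmac, utmcc) come from the unofficial parameter lists mentioned above, and the cookie string is a simplified placeholder, so treat the details as assumptions rather than a supported interface.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.ThreadLocalRandom;

    // Hypothetical sketch: report a "pageview" for a desktop Java tool by
    // requesting Google Analytics' __utm.gif directly. Parameter names are
    // taken from the unofficial parameter lists and may change.
    public class UsagePing {

        private static final String ACCOUNT = "UA-XXXXXXX-1";    // your GA web property ID
        private static final String HOST = "mytool.example.com"; // host name registered in GA

        public static void trackPageview(String page) {
            try {
                int visitorId = ThreadLocalRandom.current().nextInt(Integer.MAX_VALUE);
                long now = System.currentTimeMillis() / 1000L;
                // Simplified fake cookie string; GA expects __utma/__utmz style values.
                String utmcc = "__utma=1." + visitorId + "." + now + "." + now + "." + now + ".1;"
                        + "+__utmz=1." + now + ".1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none);";

                String query = "utmwv=4.3"
                        + "&utmn=" + ThreadLocalRandom.current().nextInt(Integer.MAX_VALUE)
                        + "&utmhn=" + enc(HOST)
                        + "&utmp=" + enc(page)
                        + "&utmac=" + ACCOUNT
                        + "&utmcc=" + enc(utmcc);

                HttpURLConnection conn = (HttpURLConnection) new URL(
                        "http://www.google-analytics.com/__utm.gif?" + query).openConnection();
                conn.getResponseCode();   // fire and forget; the body is just a 1x1 gif
                conn.disconnect();
            } catch (Exception e) {
                // Tracking must never break the tool itself.
            }
        }

        private static String enc(String s) {
            return URLEncoder.encode(s, StandardCharsets.UTF_8);
        }

        public static void main(String[] args) {
            trackPageview("/startup");
        }
    }

Calling trackPageview at startup (or on specific features) should then show up as pageviews for those paths in your Google Analytics reports.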
Just use Google Analytics; it's really easy and requires only a short script on your pages.
Michal Kebrt's simple UNIX HTTP server does exactly what I was looking for.
I am wondering whether the App Store provides an API that allows others to access data such as descriptions, prices, reviews, and so on.
The iTunes Store is the API.
All pages in the iTunes Store are simply XML files rendered by iTunes. You can parse these files yourself and navigate around to your heart's content.
Here's the URL for the front page:
http://ax.phobos.apple.com.edgesuite.net/WebObjects/MZStore.woa/wa/com.apple.jingle.app.store.DirectAction/storeFront
You might also want to see:
http://www.aaronsw.com/2002/itms/
http://www.s-seven.net/itunes_xml
Apple has an official API for the App Store; it's called the iTunes Search API. The documentation also includes some examples of how to use the "lookup" and "search" endpoints. It's quite easy to use, and data is returned in JSON format :)
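For example, a quick Java sketch against those two endpoints might look like this (the numeric ID below is just a placeholder; use the ID from the app's App Store URL):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Quick demo of the public iTunes Search API.
    public class AppStoreLookup {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Look up a single app by its numeric App Store ID (placeholder below).
            HttpRequest lookup = HttpRequest.newBuilder(
                    URI.create("https://itunes.apple.com/lookup?id=123456789")).build();

            // Or search the store by keyword, restricted to iOS apps.
            HttpRequest search = HttpRequest.newBuilder(
                    URI.create("https://itunes.apple.com/search?term=chess&entity=software&limit=5")).build();

            // Both endpoints return JSON with a resultCount and a results array
            // containing fields such as description, price, and averageUserRating.
            System.out.println(client.send(lookup, HttpResponse.BodyHandlers.ofString()).body());
            System.out.println(client.send(search, HttpResponse.BodyHandlers.ofString()).body());
        }
    }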
Unfortunately, that's not the case with Google Play (previously known as Android Market), which does not expose apps' metadata through an API.
To get that data for Android, you could develop your own HTML crawler, parse the pages, and extract the app metadata you need. This topic has been covered in other questions, for instance here.
If you don't want to implement all of that yourself, you could use a third-party service to access Android app metadata through a JSON-based API.
For instance, 42matters.com (the company I work for) offers a unified API for both Android and iOS; here are more details:
https://42matters.com/app-market-data
The endpoints range from "lookup" (to get one app's metadata, probably what you need) to "search", but we also expose "rank history" and other stats from the leading app stores. We have extensive documentation for all supported features; you can find it in the left panel: https://42matters.com/docs/overview
I hope this helps; otherwise, feel free to get in touch with me. I know this industry quite well and can point you in the right direction.
Regards,
Andrea