How to choose the correct "scenario" when using Microsoft Cognitive Services (Bing Speech API, Speech to Text)?

I found that the Bing Speech API seems to support many scenarios:
1. websearch
2. catsearch
3. smd
4. ulm
Are there any other supported scenarios? And where can I find documentation
on how to choose a scenario?
Thanks in advance

Related

How do I use IBM Watson's Visual Recognition service with Dart in Flutter?

I want to use IBM Watson's Visual Recognition service, specifically their waste identifier as shown here: https://developer.ibm.com/technologies/artificial-intelligence/patterns/recycle-with-watson/
It only talks about using it in an iOS application, but I want to use it with Dart in Flutter. I am not too clear on how to do this, so if someone could teach me some of the basics, that would be great. By the way, I do not want to train an IBM Visual Recognition model; I want to use the classifier that IBM provides, as shown in the link above.
I suggest starting with the Getting Started guide, which shows how to provision the Visual Recognition service. The next step is either to reuse their server application, which includes loading the training data, or to explore writing your own. See the docs for links to either the API (which works with every programming language) or SDKs for some of the more common programming languages.
There is an easy-to-follow API doc to help you get started with the Visual Recognition service.
The Flutter documentation explains very well how to make API calls.
Not recommended, but to help you with the basics: there is an old Flutter package (last updated in 2018) that has Dart code samples to get you started.
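Since the REST API works from any language, the main thing to get right is the request shape. The sketch below (in TypeScript; the same structure translates to Dart's `http` package) builds a classify request for the v3 endpoint. The service URL, API key, and the exact endpoint/version date are assumptions based on the common Watson Visual Recognition v3 pattern; check the API doc linked above for your instance's actual values.

```typescript
// Sketch of the HTTP request shape for Watson Visual Recognition's
// `/v3/classify` endpoint. Instance URL, key, and version date are
// placeholders -- take the real values from your IBM Cloud dashboard.
interface ClassifyRequest {
  method: string;
  url: string;
  headers: Record<string, string>;
}

function buildClassifyRequest(
  serviceUrl: string, // your instance URL from the IBM Cloud dashboard
  apiKey: string,     // IAM API key for the service
  imageUrl: string    // publicly reachable image to classify
): ClassifyRequest {
  const version = "2018-03-19"; // API version date expected by the service
  const query = new URLSearchParams({ version, url: imageUrl });
  return {
    method: "GET",
    url: `${serviceUrl}/v3/classify?${query.toString()}`,
    headers: {
      // Watson IAM uses HTTP basic auth with the literal user name "apikey"
      Authorization:
        "Basic " + Buffer.from(`apikey:${apiKey}`).toString("base64"),
      Accept: "application/json",
    },
  };
}
```

In Dart you would feed the same URL and headers into `http.get`; the response is a JSON body with a `classifiers` array per image.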

How to train IBM Watson Assistant to answer from a specific dataset (say, an eBook)?

I am a newbie to IBM Watson. I went through videos on creating a virtual assistant/chatbot where we define intents/entities and answer accordingly. This works fine when I have a limited number of intents/entities. But say I have an eBook and I want to train Watson to answer from this eBook. How do I achieve this? Any high-level approach or direction would be really helpful.
There are different approaches.
You could use the integrated search skill, which provides a link to Watson Discovery. You would upload your eBook to Watson Discovery, which indexes it.
Another approach is to use a database or something else as the backend. Based on the input, which identifies the search term and scopes which eBook to search, the answer would be retrieved from the backend database. This tutorial features a Db2 database from which Watson Assistant retrieves the answer. A similar approach is taken in this sample, which shows how to retrieve excerpts from Wikipedia.
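The database-backed approach boils down to a lookup: the assistant extracts a search term from the user's utterance and a webhook resolves it against the content store. A minimal sketch, with a plain `Map` standing in for the Db2 table or Discovery index; all names here are illustrative and not part of any Watson SDK:

```typescript
// Minimal sketch of the "database as backend" pattern: Watson Assistant's
// webhook extracts a search term, and a function like this looks up the
// answer. A Map stands in for the real Db2 table / Discovery index.
type AnswerStore = Map<string, string>;

function lookupAnswer(store: AnswerStore, searchTerm: string): string {
  // Normalize the term the same way the stored keys were normalized.
  const key = searchTerm.trim().toLowerCase();
  return store.get(key) ?? "Sorry, I could not find anything about that.";
}

// Example "eBook index": in a real deployment these would be rows in Db2
// or passages indexed by Watson Discovery.
const ebookIndex: AnswerStore = new Map([
  ["recycling", "Chapter 3 covers recycling best practices."],
  ["composting", "Chapter 5 explains home composting."],
]);
```

The search-skill route replaces this hand-rolled lookup with Discovery's passage retrieval, which is why it scales better than enumerating intents.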

WebkitSpeechRecognition Architecture

I know WebkitSpeechRecognition is only available in Chromium-based browsers. However, I am wondering how it converts voice into text.
I tried to monitor the network log from the developer console in Google Chrome, and I don't see any network activity. I thought it would send an API request to Google, but apparently it doesn't.
I cannot find any architectural documentation on this either.
Does anyone have any idea?
To my knowledge, there is no official documentation for the Google Speech API that is used in Chromium, but it has been reverse-engineered by inspecting Chromium's source code.
When you search for it, you should find multiple blogs/tutorials that describe how the REST API can be used.
A good description of how to use it can be found here:
http://blog.travispayton.com/wp-content/uploads/2014/03/Google-Speech-API.pdf
(With regard to the description in the PDF: the mentioned "Speech API V1" is deactivated by now, so only the "Full-Duplex API" can be used.)
But note that you need an API key via Google's Developer Console (for the Speech API), and for that you need to be registered in the Chromium Development Group.
Also, using your own key, the Speech API is currently limited to 50 transactions per day.
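From the page's perspective, none of that plumbing is visible: you just construct a recognizer and read transcripts from result events. A sketch of that usage, written with the constructor passed in so it can be exercised outside a browser; in Chrome you would pass `(window as any).webkitSpeechRecognition`. The `RecognitionLike` interface is a hand-written subset of the real API, not an official type.

```typescript
// Sketch of how a page consumes webkitSpeechRecognition. The constructor
// is injected so the wiring can be tested without a browser.
interface RecognitionLike {
  lang: string;
  continuous: boolean;
  interimResults: boolean;
  onresult: ((event: any) => void) | null;
  start(): void;
}

function createRecognizer(
  Ctor: new () => RecognitionLike,
  onTranscript: (text: string) => void
): RecognitionLike {
  const rec = new Ctor();
  rec.lang = "en-US";
  rec.continuous = false;     // stop after one utterance
  rec.interimResults = false; // deliver only final results
  rec.onresult = (event) => {
    // Chrome delivers results as event.results[i][j].transcript
    onTranscript(event.results[0][0].transcript);
  };
  return rec;
}
```

Calling `rec.start()` in Chrome is what triggers the (hidden) streaming connection to Google's Full-Duplex speech endpoint described in the PDF above.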

Using Google Map APIs

I am currently working on a personal project to develop a REST API which would perform tasks similar to what UBER, OLA like taxi aggregators do. Below is the brief about the functionality that I plan to add:
1) I have a fleet of cabs whose locations are determined by latitude and longitude.
2) A customer can call one of the cabs by providing their location, and my API should assign the nearest available cab.
This, I suppose, would be accomplished by using the Google Maps APIs. My question is: how do I start using these APIs to simulate such functionality?
You may use the following references:
Choose from the Google Maps APIs documentation depending on your needs. There are tutorials given within the documentation.
The answers to Frequently Asked Questions will also help, especially the getting-started part, to fully understand how the Google Maps APIs work.
Last but definitely not the least, this example in GitHub might help you exactly on the implementation.
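For the "assign the nearest cab" step specifically, Google's Distance Matrix API would give you road distances, but a first simulation can use plain straight-line (haversine) distance over the stored coordinates without any API call. A sketch (the `Cab` shape and function names are illustrative, not from any Google SDK):

```typescript
// Pick the nearest cab by great-circle (haversine) distance.
// A real aggregator would refine this with road distance / ETA from
// the Distance Matrix API.
interface Cab { id: string; lat: number; lng: number; }

const EARTH_RADIUS_KM = 6371;

function haversineKm(lat1: number, lng1: number, lat2: number, lng2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}

function nearestCab(cabs: Cab[], lat: number, lng: number): Cab | undefined {
  let best: Cab | undefined;
  let bestDist = Infinity;
  for (const cab of cabs) {
    const d = haversineKm(cab.lat, cab.lng, lat, lng);
    if (d < bestDist) { bestDist = d; best = cab; }
  }
  return best;
}
```

The linear scan is fine for a small fleet; with many cabs you would switch to a spatial index (geohash buckets or a k-d tree) before falling back to exact distances.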

What happened to the Watson visualize service?

I noticed the Watson API for Personality Insights that can visualize the profile in a cool D3.js chart has been deprecated. What are the plans to support this going forward?
Instead of an API, we have made it available as a client-side library. It's available in the sample application code: https://github.com/watson-developer-cloud/personality-insights-nodejs.
"The visualize API is now deprecated and will be removed entirely in a future release. You can use the personality.js JavaScript file that is provided with the sample application to achieve similar results from the client. The textsummary.js JavaScript file provides additional formatting for the results of the service."