How do I use IBM Watson's Visual Recognition service with Dart in Flutter?

I want to use IBM Watson's Visual Recognition service, specifically their waste identifier, as shown here: https://developer.ibm.com/technologies/artificial-intelligence/patterns/recycle-with-watson/
The pattern only talks about using it in an iOS application, but I want to use it with Dart in Flutter. I am not too clear on how to do this, so if someone could teach me some of the basics, that would be great. By the way, I do not want to train an IBM Visual Recognition model; I want to use the classifier that IBM provides, as shown in the link above.

I suggest starting with the Getting Started guide, which shows how to provision the Visual Recognition service. The next step is either to reuse their server application, which includes loading the training data, or to explore writing your own. See the docs for links to either the API (which works with every programming language) or the SDKs for some of the more common programming languages.

There is an easy-to-follow API doc to help you get started with the Visual Recognition service.
The Flutter documentation also explains very well how to make API calls from Dart; a rough sketch of the raw REST request is shown below.
Not recommended, but to help you with the basics: there is an old Flutter package (last updated in 2018) that has Dart code samples to get you started.
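Since the REST API works from any language, the whole integration boils down to one authenticated HTTP request. Below is a rough sketch of the v3 classify call, written in TypeScript only to illustrate the shape of the request; the same call can be issued from Dart with the http package. The service URL, version date, and classifier ID are assumptions based on the general v3 API layout, so substitute the values from your own service credentials and the API reference.

```typescript
// Rough sketch of a Visual Recognition v3 "classify" call over plain HTTPS.
// Runs on Node 18+ (global fetch). Endpoint, version date and classifier ID
// are assumptions -- copy the real values from your service credentials and
// the API reference.
const API_KEY = "<your-iam-api-key>";           // from your service credentials
const SERVICE_URL = "https://gateway.watsonplatform.net/visual-recognition/api";
const VERSION = "2018-03-19";                   // a version date accepted by v3
const CLASSIFIER = "default";                   // or the ID of the classifier you want

async function classifyImageByUrl(imageUrl: string): Promise<void> {
  const params = new URLSearchParams({
    version: VERSION,
    url: imageUrl,
    classifier_ids: CLASSIFIER,
  });

  // IAM authentication uses HTTP basic auth with the literal user name "apikey".
  const auth = Buffer.from(`apikey:${API_KEY}`).toString("base64");

  const response = await fetch(`${SERVICE_URL}/v3/classify?${params}`, {
    headers: { Authorization: `Basic ${auth}` },
  });
  const body = await response.json();

  // The response contains images[].classifiers[].classes[] with name/score pairs.
  console.log(JSON.stringify(body, null, 2));
}

classifyImageByUrl("https://example.com/bottle.jpg").catch(console.error);
```

From Flutter, you would reproduce the same thing with package:http: a GET to /v3/classify with the version and url query parameters and a Basic auth header built from apikey:&lt;your key&gt;.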

Related

How to use OpenTripPlanner on a web application?

I am new to OpenTripPlanner (and OpenStreetMap too), and I would like to use it in a web application where I would let the user choose preferred options (like travel mode) and even use tags to create a personalized route.
Following the tutorial Basic Usage, I've run the jar file and now I have an instance of OTP running on localhost correctly.
Now, how can I integrate it into a web app and let the user use it? I couldn't find any tutorial about that. I also have some other doubts:
I've downloaded GTFS data for Venice, but what do I have to do if I want to work with multiple locations?
Since I also have to download OpenStreetMap data for the same region as the GTFS file (as explained in the tutorial above), how is it possible to integrate all the files to, say, visualize the roads and plan journeys across an entire nation?
How can I use OSM tags to personalize journeys?
I know this is a lot, but I really don't know where to start. Any help, tutorial or guide link would be truly appreciated.
OTP comes with a default web application that is available after starting the server if you go to localhost:8080. This is explained in the Basic Usage section of the docs.
As for the rest of your question, I'd recommend looking at the Configuration section of the docs.
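Beyond the bundled UI, your own web front end can call OTP's REST planner endpoint directly. Here is a minimal sketch in TypeScript, assuming an OTP 1.x server running locally with the default router; the coordinates, date, and time are just example values.

```typescript
// Minimal sketch: request an itinerary from a locally running OTP 1.x server.
// The router name ("default") and the example coordinates are assumptions.
async function planTrip(): Promise<void> {
  const params = new URLSearchParams({
    fromPlace: "45.4371,12.3326",   // origin as "lat,lon" (example: Venice)
    toPlace: "45.4931,12.2456",     // destination as "lat,lon"
    mode: "TRANSIT,WALK",           // travel modes the user selected
    date: "2023-05-01",
    time: "09:00am",
  });

  const response = await fetch(`http://localhost:8080/otp/routers/default/plan?${params}`);
  const result = await response.json();

  // result.plan.itineraries holds the computed journeys (legs, durations, etc.).
  console.log(result.plan?.itineraries?.length, "itineraries found");
}

planTrip().catch(console.error);
```

The user's choices (travel mode, date, time, and so on) map directly onto those query parameters, so the front end only has to collect them and render the returned itineraries.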

Do I have to use API.AI to create an action for Google Home?

I have some experience building chat and voice agents for other platforms, but I’m not using API.AI to understand natural language and parse intents. Do I have to replace my existing solution with API.AI?
Not at all. The advantages of using API.AI in creating a Conversation Action include Natural Language Understanding and grammar expansion, form filling, intent matching, and more.
That said, the Actions on Google platform includes a CLI, client library, and Web Simulator, all of which can be used to develop an Action entirely independently of API.AI. To do this you'll need to build your own Action Package, which describes your Action and expected user grammars, and an endpoint to serve the Assistant's requests and provide responses to your users' queries. The CLI can be used to deploy your Action Package directly to Google, and you can host your endpoint on any hosting service you wish. Google recommends App Engine on Google Cloud Platform.
I found this explanation from the official page most helpful.
API.AI
Use this option for most use cases. Understanding and parsing natural, human language is a very hard task, and API.AI does all that for you. API.AI also wraps the functionality of the Actions SDK into an easy-to-use web IDE that has conveniences such as generating and deploying action packages for you.
It also lets you build conversational experiences once and deploy to many other platforms other than Actions on Google.
ACTIONS SDK
Use this option if you have simple actions that have very short conversations with limited user input variability. These types of actions typically don't require robust language understanding and typically accomplish one quick use case.
In addition, if you already have an NLU that you want to use and just want to receive raw text and pass it to your own NLU, you will also need to use the Actions SDK.
Finally, the Actions SDK doesn't provide modern conveniences of an IDE, so you have to manually create action packages with a text editor and deploy them to your Google Developer project with a command-line utility.
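For a feel of what the Actions SDK path involves, here is a sketch of a fulfillment endpoint using the actions-on-google Node.js client library's Actions SDK support (its v2-style API) behind Express; the port and prompt text are arbitrary, and you would still describe the Action in an action package and push it with the gactions CLI.

```typescript
// Sketch of an Actions SDK fulfillment endpoint using the actions-on-google
// client library (v2-style API) served through Express. Deploying still
// requires an action package pushed with the gactions CLI.
import { actionssdk } from "actions-on-google";
import express from "express";

const app = actionssdk();

// Invoked when the user starts the Action ("talk to ...").
app.intent("actions.intent.MAIN", (conv) => {
  conv.ask("Hi! Tell me something and I'll echo it back.");
});

// Raw user text arrives here; pass it to your own NLU if you have one.
app.intent("actions.intent.TEXT", (conv, input) => {
  conv.close(`You said: ${input}`);
});

express().use(express.json(), app).listen(8080);
```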
Google is aggressively pushing everybody to API.AI. The only SDK they have (Node.js) no longer supports expected events, for instance. Of course, you don't need to rely on their SDK (you can talk to the API directly), but they may change the API too. So proceed with caution.

WebkitSpeechRecognition Architecture

I know webkitSpeechRecognition is only available in Chromium-based browsers. However, I am wondering how it converts the voice into text.
I tried to monitor the network log from the developer console in Google Chrome, and I don't see any network activity. I thought it would send an API request to Google, but it apparently doesn't.
I cannot find any architectural document on this either.
Does anyone have any idea?
To my knowledge, there is no official documentation for the Google Speech API that is used in Chromium, but it has been reverse engineered by inspecting Chromium's source code.
When you search for it, you should find multiple blogs / tutorials that describe how the REST API can be used.
A good description of how to use it can be found here:
http://blog.travispayton.com/wp-content/uploads/2014/03/Google-Speech-API.pdf
(With regard to the description in the PDF: the mentioned "Speech API V1" has since been deactivated, so only the "Full-Duplex API" can be used.)
But note that you need an API key via Google's Developer Console (for the Speech API), and for that you need to be registered in the Chromium Development Group.
Also, using your own key, the Speech API is currently limited to 50 transactions per day.
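To make the flow in that PDF concrete, here is a rough sketch of the two-request ("full-duplex") exchange. The endpoints and parameter names are taken from that reverse-engineered write-up, not from any official documentation, so treat them as assumptions that may stop working at any time; you also need your own whitelisted API key.

```typescript
// Sketch of the "Full-Duplex" speech API flow that Chromium's
// webkitSpeechRecognition reportedly uses behind the scenes. Endpoints and
// parameters follow the reverse-engineered description linked above and are
// assumptions, not an official contract. Runs on Node 18+ (global fetch).
import { randomBytes } from "node:crypto";
import { readFile } from "node:fs/promises";

const API_KEY = "<your-chromium-dev-api-key>";
const PAIR = randomBytes(8).toString("hex"); // token linking the two requests

async function transcribe(flacPath: string): Promise<void> {
  const audio = await readFile(flacPath); // FLAC-encoded audio, e.g. 16 kHz

  // 1. "down" stream: open it first and keep reading; results arrive here.
  const down = fetch(
    `https://www.google.com/speech-api/full-duplex/v1/down?pair=${PAIR}`
  ).then((res) => res.text());

  // 2. "up" stream: POST the audio, tagged with the same pair token.
  await fetch(
    `https://www.google.com/speech-api/full-duplex/v1/up?key=${API_KEY}` +
      `&pair=${PAIR}&lang=en-US&output=json`,
    {
      method: "POST",
      headers: { "Content-Type": "audio/x-flac; rate=16000" },
      body: audio,
    }
  );

  // The down stream returns JSON messages containing the transcripts.
  console.log(await down);
}

transcribe("sample.flac").catch(console.error);
```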

Watson conversation - How to deploy?

I've created and trained a basic dialog, and it's now ready to be used on my web site.
I can't find any docs on how to deploy and use the application.
Can anyone help me?
You need to build a front end application that allows users to interact with the conversation service.
There are generated SDKs for various programming languages that can help you do this.
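As a concrete starting point, here is a minimal sketch of the back-end piece, assuming the (older) watson-developer-cloud Node.js SDK and the Conversation v1 message API; the credentials, workspace ID, and version date are placeholders you would replace with your own service details. Your web page then just posts the user's text to this back end and renders the reply.

```typescript
// Minimal sketch: relay one user message to the Conversation service using
// the (older) watson-developer-cloud Node SDK. Workspace ID, credentials and
// version date are placeholders -- use the values from your own service.
const ConversationV1 = require("watson-developer-cloud/conversation/v1");

const conversation = new ConversationV1({
  username: "<service-username>",
  password: "<service-password>",
  version_date: "2017-05-26",
});

// Send one user utterance and print Watson's reply; keep `context` between
// calls so the dialog remembers where it is.
conversation.message(
  {
    workspace_id: "<your-workspace-id>",
    input: { text: "Hello" },
    context: {},
  },
  (err: Error | null, response: any) => {
    if (err) {
      console.error(err);
      return;
    }
    console.log(response.output.text.join("\n"));
  }
);
```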

What happened to the Watson visualize service?

I noticed the Watson API for Personality Insights that can visualize the profile in a cool D3.js chart has been deprecated. What are the plans to support this going forward?
Instead of an API, we have made it available as a client-side library. It's available in the sample application code: https://github.com/watson-developer-cloud/personality-insights-nodejs
"The visualize API is now deprecated and will be removed entirely in a future release. You can use the personality.js JavaScript file that is provided with the sample application to achieve similar results from the client. The textsummary.js JavaScript file provides additional formatting for the results of the service."