I want to distinguish my bot's Alexa and Google Home experience from text-based bots. Text-based bots support Rich Response types, but audio-only ones do not.
My problem is that I can't find a field in the Dialogflow V2beta1 API docs that indicates whether a query came from text or from audio. It looks like V1 had a field that used a numeric enum to indicate this, but I can't find a V2beta1 equivalent.
With Actions on Google, instead of checking the input type of the source query, you can check the surface capabilities, which tell you whether the device has a screen or not.
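For example, with the actions-on-google client library (v2) in your Dialogflow fulfillment, the check could look roughly like this (a sketch; the intent name is a placeholder):

    // Sketch using the actions-on-google v2 Node.js library with Dialogflow.
    // The intent name 'Default Welcome Intent' is just a placeholder.
    import { dialogflow } from 'actions-on-google';

    const app = dialogflow();

    app.intent('Default Welcome Intent', (conv) => {
      const hasScreen =
        conv.surface.capabilities.has('actions.capability.SCREEN_OUTPUT');

      if (hasScreen) {
        // Phone or Smart Display: rich responses (cards, lists) are fine.
        conv.ask('Here is the visual version of the answer.');
      } else {
        // Voice-only surface such as Google Home: keep it audio-friendly.
        conv.ask('Here is the spoken version of the answer.');
      }
    });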
The introductory paragraph on the developer page for the MediaState trait implies that the MediaState trait lets you provide Assistant with information about the media item your device is currently playing... The examples it gives suggest it includes a way to provide a user-readable title for the media being played, as well as a URI to the actual media stream and information about the viewer's current position within the stream, so that playback could be moved to another device.
However, looking at the actual Device STATES section of the page, all I see is a schema for telling Assistant whether the device is currently playing / paused / stopped / fast forwarding / etc... Nothing for providing information about the media item being played.
Did I miss something or has Google simply not fully implemented (or not fully documented) this trait?
Apologies for any confusion created by the current documentation. The schemas presented in the trait reference are accurate in that MediaState currently only supports reporting the state of activity and playback controls in conjunction with the TransportControl trait.
The intent is to provide support for more descriptive media items in the future, but that is not currently part of the API. This is an area where we could use feedback on what types of metadata would best suit your use case, so I would recommend filing a feature request on the public tracker.
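For reference, that means the device state you can report today is limited to fields like the following (a sketch of a QUERY response fragment; the request and device ids are placeholders):

    // Sketch of the MediaState fields a QUERY response can carry today,
    // alongside TransportControl support. The ids are placeholders.
    const queryResponse = {
      requestId: 'ff36a3cc-ec34-11e6-b1a0-64510650abcf',
      payload: {
        devices: {
          'device-123': {
            online: true,
            // MediaState exposes only activity and playback status...
            activityState: 'ACTIVE',   // INACTIVE | STANDBY | ACTIVE
            playbackState: 'PLAYING',  // PAUSED | PLAYING | FAST_FORWARDING | ...
            // ...but no title, stream URI, or playback position for the item.
          },
        },
      },
    };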
I am interested in building my own cards (since there isn't any such card currently available..?), similar to this question. However, no one has answered that question.
If not, how else can we achieve deep linking to open other apps? For example, to get directions I don't mind having to open up Google Maps. But that only seems to work on Android, and it is still in Developer Preview..?
I also want to allow the user to click on a card or a button and call a mobile number, but the url field only accepts http/https URL schemes and not tel://, so that workaround can't work...
You can't build your own rich response types; they are internal features of the Dialogflow platform and the Assistant apps, and you can only use them as far as Google exposes them via the APIs. You are not alone in wanting more advanced rich responses (I'd like to have free-form HTML cards), but waiting is all you can do here.
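In the meantime, the closest you can get is composing the response types that are exposed, for example a BasicCard with a link-out button (a sketch with the actions-on-google v2 library; the intent name and URL are placeholders):

    // Sketch: composing an exposed rich response instead of a custom card.
    // The intent name and URL are placeholders.
    import { dialogflow, BasicCard, Button } from 'actions-on-google';

    const app = dialogflow();

    app.intent('Show Details', (conv) => {
      conv.ask('Here are the details.');
      conv.ask(new BasicCard({
        title: 'Details',
        text: 'Only the fields Google exposes are available here.',
        buttons: new Button({
          title: 'Open in browser',
          url: 'https://example.com/details', // http/https only; tel:// is rejected
        }),
      }));
    });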
First time posting, so feel free to give me feedback if I could improve something about this post... Now on to my question.
I am currently developing a Google Action. The Action will allow the user to define important events, such as Bob's Birthday or Fred's Graduation, and save data about those events. Later, the user will be able to ask for info about an event and get it returned to them.
I am using the Dialogflow API with "Inline Editor" fulfillment to keep it as simple as possible for right now. The problem I am running into is this: the event has an entity type of #sys.any, so anything the user says is accepted as valid input. However, I would like some way to bias toward events I already have stored for the user, so that they are more likely to find the event they are looking for.
I found another answer on here discussing speech biasing (What is meant by speech bias and how to use speechBiasHints in google-actions appResponse), which defined speech biasing as the ability to "influence the speech to text recognition," which is exactly what I believe I want. While that answer provided sample code, it was for the Actions SDK, not the Dialogflow SDK, which I am using.
Can anyone provide an example of how to fill the "speechBiasingHints" section of the ExpectedInput response of the Conversation Webhook using the Dialogflow webhook?
Note: This is for a student project, and I'm new to developing Google Actions and still very much learning what is possible with them. Any feedback or suggestions are very welcome.
The question you link to does quite a few things differently from the approach you're taking. The Actions SDK provides more low-level control but doesn't have much in the way of Natural Language Processing (NLP) capabilities, which Dialogflow provides.
Dialogflow handles biasing a little differently, through the use of Entities, so you don't need to control the speech biasing directly; Dialogflow can handle that for you, to some extent.
Since each user may have a different event name, you'll probably want to use a User Entity, which is an entity you define and then populate on a user-by-user basis through Dialogflow's API. In your sample phrases, you can then use this entity name instead of #sys.any, or create another set of phrases that use this entity in addition.
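A sketch of populating such an entity for one user's session through the Dialogflow V2 API (the project id, session id, entity type name 'event', and the use of the official 'dialogflow' Node.js client are assumptions):

    // Sketch: overriding a developer entity named 'event' for one user's
    // session with the events that user has already saved. The project id,
    // session id, and entity name are placeholders; assumes the official
    // 'dialogflow' Node.js client library.
    import * as dialogflow from 'dialogflow';

    async function biasTowardUserEvents(
        projectId: string, sessionId: string, eventNames: string[]) {
      const client = new dialogflow.SessionEntityTypesClient();
      const parent = `projects/${projectId}/agent/sessions/${sessionId}`;

      await client.createSessionEntityType({
        parent,
        sessionEntityType: {
          // The last path segment must match the display name of the
          // entity type your intent parameters reference.
          name: `${parent}/entityTypes/event`,
          entityOverrideMode: 'ENTITY_OVERRIDE_MODE_OVERRIDE',
          entities: eventNames.map((e) => ({ value: e, synonyms: [e] })),
        },
      });
    }

    // e.g. biasTowardUserEvents('my-project', 'session-1',
    //                           ["Bob's Birthday", "Fred's Graduation"]);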
The StockTwits API documentation describes streams in a way that sounds like static search results, for example streams/symbol:
This allows an API application to search for a symbol or user. 30 Results will be a combined list of symbols and users.
This seems similar to search/symbols:
This allows an API application to search for a symbol directly. 30 Results will return only ticker symbols.
Other than the fact that search excludes users, I don't see the difference.
In contrast, the Twitter API provides methods to request a continuous stream of tweets, which I have gotten to provide tens of thousands of tweets in a few days.
Is it possible to have StockTwits pump messages continuously, similar to Twitter?
If so, what is required? Since StockTwits streaming looks like searching to me, the only option I have seen is to submit repeated search requests, but that would exhaust the rate limit.
I prefer C#, but I am glad to study answers in other languages, such as PHP.
This is a static search for symbols, or for both symbols and users as a combined search. It isn't a streaming endpoint for filtering content. It is strictly for finding a symbol or a user in order to go directly to the stream.
We are looking into offering streaming endpoints, and search would be part of that offering.
You may be interested in using streamdata.io, which lets you stream any API. We have already implemented a StockTwits demo, which can be found here, and explanations can be found in this blog post.
I think it's quite easy to transpose what has been done with Android to the C# world. All you need is an EventSource library and a JSON-Patch library.
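A rough sketch of that approach in Node.js/TypeScript (the streamdata.io proxy URL pattern, the token, and the 'eventsource' / 'fast-json-patch' packages are assumptions used to illustrate the idea; the same shape applies in C# with an SSE client and a JSON-Patch library):

    // Sketch: consuming a StockTwits stream through the streamdata.io proxy.
    // The proxy URL pattern, token, and package choices are assumptions.
    import EventSource from 'eventsource';
    import * as jsonpatch from 'fast-json-patch';

    const target = 'https://api.stocktwits.com/api/2/streams/symbol/AAPL.json';
    const token = 'YOUR_STREAMDATA_TOKEN'; // placeholder
    const es = new EventSource(
      `https://streamdata.motwin.net/${target}?X-Sd-Token=${token}`);

    let snapshot: any = null;

    // Initial full snapshot of the stream.
    es.addEventListener('data', (e: any) => {
      snapshot = JSON.parse(e.data);
      console.log('initial messages:', snapshot.messages.length);
    });

    // Incremental updates arrive as JSON-Patch operations.
    es.addEventListener('patch', (e: any) => {
      const patch = JSON.parse(e.data);
      snapshot = jsonpatch.applyPatch(snapshot, patch).newDocument;
      console.log('messages after update:', snapshot.messages.length);
    });

    es.onerror = (err: any) => console.error('stream error', err);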
MapKit doesn't natively support local search results, so I'm looking for a way to get a list of local pizzerias (or coffee shops, or a specific retailer) via some HTTP API call.
The default Google Maps API requires JavaScript, so it's not clear to me how to integrate this into an iPhone app (without displaying a UIWebView).
I have found that a URL in a format such as this:
http://maps.google.com/maps?output=json&q=pizza&near=37.3,-122&num=10
does return a JSON-like list of results, but my usual friendly JSON parser, json-framework, barfs when it tries to parse this (even if I do clever-sounding things like stripping the "while(1);" at the start of the reply). I'm also not sure how legitimate this URL is to use for this purpose.
I'm on the same quest. It seems that one option would be to perform the local search using Google's AJAX Search API, then plug that data into MapKit.
That said, it's not entirely clear to me yet that this approach is in the clear vis-à-vis Google's terms of service. Alright, changed my mind because of this: it's a post on Google's own AJAX API blog, including video of a native iPhone app. Looks like this is the approved solution.
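For what it's worth, the request shape for that API's local search looked roughly like this (a sketch; the endpoint, parameters, and response fields are assumptions from memory of the then-current docs, and an iPhone app would issue the same GET natively):

    // Sketch of a Local Search request against the AJAX Search API; the
    // endpoint, parameters, and response fields are assumptions.
    const url = 'http://ajax.googleapis.com/ajax/services/search/local'
      + '?v=1.0&q=pizza&sll=37.3,-122';

    fetch(url)
      .then((res) => res.json())
      .then((data) => {
        // Each result carries a title plus lat/lng you can hand to MapKit.
        for (const r of data.responseData.results) {
          console.log(r.titleNoFormatting, r.lat, r.lng);
        }
      });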