The introductory paragraph on the developer page for the MediaState trait implies that it lets you provide Assistant with information about the media item your device is currently playing... The examples it gives suggest it includes a way to provide a user-readable title for the media being played, as well as a URI to the actual media stream and information about the viewer's current position within the stream, so that playback could be moved to another device.
However, looking at the actual Device STATES section of the page, all I see is a schema for telling Assistant whether the device is currently playing / paused / stopped / fast-forwarding / etc. There is nothing for providing information about the media item being played.
Did I miss something or has Google simply not fully implemented (or not fully documented) this trait?
Apologies for any confusion created by the current documentation. The schemas presented in the trait reference are accurate in that MediaState currently only supports reporting the state of activity and playback controls in conjunction with the TransportControl trait.
The intent is to provide support for more descriptive media items in the future, but that is not currently part of the API. This is an area where we could use feedback on what types of metadata would best suit your use case, so I would recommend filing a feature request on the public tracker.
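For what it's worth, my reading of the trait schema is that a QUERY response for a device reporting MediaState can currently only carry the activity and playback fields, roughly like the sketch below. The request ID and device ID are placeholders, and the state values are just examples of the enum values listed in the trait reference.

```typescript
// Sketch of a QUERY response for a device that reports only MediaState.
// Note there is no field for a media title, stream URI, or playback position.
const queryResponse = {
  requestId: "request-id-from-assistant",   // placeholder
  payload: {
    devices: {
      "my-media-device": {                  // placeholder device ID
        online: true,
        status: "SUCCESS",
        activityState: "ACTIVE",            // e.g. INACTIVE / STANDBY / ACTIVE
        playbackState: "PLAYING",           // e.g. PLAYING / PAUSED / STOPPED
      },
    },
  },
};
```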
I'm creating a type of ride-sharing application for iOS using Swift and Firebase Functions and would like to implement the following workflow:
Passenger requests ride from specific driver
Driver has 2 options
a. Driver accepts and the passenger's card is charged
b. Driver declines and that's it
I've gone through pages and pages of Stripe's documentation and GitHub to find the best example to work from, but can't seem to find one that fits what I'm after.
You can find an example here: https://stripe.com/docs/connect/collect-then-transfer-guide
There's also https://rocketrides.io/ which is a complete example, including code, of a ride sharing app.
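If it helps to see what the accept/decline branch might look like in a Firebase Function, here is a rough sketch using Stripe's Node SDK with a manually captured PaymentIntent: authorize the fare when the ride is requested, capture it if the driver accepts, and release it if the driver declines. The key, amount, account IDs, and function names are placeholders, not anything taken from Stripe's guide.

```typescript
import Stripe from "stripe";

// Placeholder secret key; in a Firebase Function you would read this from config.
const stripe = new Stripe("sk_test_...");

// 1. Passenger requests a ride: authorize (but don't yet capture) the fare.
async function requestRide(customerId: string, paymentMethodId: string, driverAccountId: string) {
  return stripe.paymentIntents.create({
    amount: 1500,                        // fare in cents (placeholder)
    currency: "usd",
    customer: customerId,
    payment_method: paymentMethodId,
    confirm: true,
    capture_method: "manual",            // hold the funds until the driver responds
    transfer_data: { destination: driverAccountId }, // Connect destination charge
  });
}

// 2a. Driver accepts: capture the held funds, charging the passenger's card.
async function driverAccepts(paymentIntentId: string) {
  return stripe.paymentIntents.capture(paymentIntentId);
}

// 2b. Driver declines: release the hold, nothing is charged.
async function driverDeclines(paymentIntentId: string) {
  return stripe.paymentIntents.cancel(paymentIntentId);
}
```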
First time posting, so feel free to give me feedback if I could improve something about this post... Now on to my question.
I am currently developing a Google Action, the Action will allow the user to define important events, such as Bob's Birthday or Fred's Graduation, and save data about said events. Later, the user will be able to ask for info about the event and get it returned back to them.
I am using the Dialogflow API with "Inline Editor" fulfillment to keep it as simple as possible for right now. The problem I am running into is this: the event has an entity type of #sys.any, so anything the user says is accepted as valid input. However, I would like some way to bias towards events I already have stored for the user, so that they are more likely to find the event they are looking for.
I found another answer on here discussing speech biasing (What is meant by speech bias and how to use speechBiasHints in google-actions appResponse), which defined speech biasing as the ability to "influence the speech to text recognition," which is exactly what I believe I want. While that answer provided sample code, it was for the Actions SDK, not the Dialogflow SDK, which I am using.
Can anyone provide an example of how to fill the "speechBiasingHints" section of the ExpectedInput response of the Conversation Webhook using the Dialogflow Webhook?
Note: This is for a student project, and I'm new to developing Google Actions and still very much learning about everything that is possible with Google Actions. Any feedback or suggestions are very welcome.
The question you link to does quite a few things differently from the approach you're taking. The Actions SDK provides more low-level control, but doesn't have the Natural Language Processing (NLP) capabilities that Dialogflow provides.
Dialogflow handles biasing a little differently, through the use of Entities, so you don't need to control the speech biasing directly; Dialogflow can handle that for you, to some extent.
Since each user may have a different event name, you'll probably want to use a User Entity, which is an entity you define and then populate on a user-by-user basis through Dialogflow's API. In your sample phrases, you can then use this entity name instead of #sys.any, or create another set of phrases that use this entity in addition.
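As a rough illustration (not something from the official samples), a fulfillment could push the user's saved event names into the agent with the Dialogflow Node.js client before asking the follow-up question. In the V2 API these per-user entities are called session entity types. The project ID, session ID, and the entity display name "event" below are placeholders for whatever your agent actually uses.

```typescript
import { SessionEntityTypesClient } from "@google-cloud/dialogflow";

// Push the user's saved event names into a session entity so Dialogflow
// biases recognition toward them for the rest of the conversation.
async function biasTowardSavedEvents(projectId: string, sessionId: string, savedEvents: string[]) {
  const client = new SessionEntityTypesClient();
  const sessionPath = `projects/${projectId}/agent/sessions/${sessionId}`;

  await client.createSessionEntityType({
    parent: sessionPath,
    sessionEntityType: {
      // The last path segment must match the entity's display name in the agent.
      name: `${sessionPath}/entityTypes/event`,
      entityOverrideMode: "ENTITY_OVERRIDE_MODE_OVERRIDE",
      entities: savedEvents.map((e) => ({ value: e, synonyms: [e] })),
    },
  });
}
```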
I want to distinguish my bot's Alexa and Google Home experience from text-based bots. Text-based bots support Rich Response types, but audio ones do not.
My problem is that I can't find a field in the Dialogflow V2beta1 API docs that indicates whether the query source was text or audio. It looks like in V1 there was a message field that used a numeric enum to indicate this, but I can't find a V2beta1 equivalent.
With Actions on Google, instead of checking the input type of the source query, you can check the surface capabilities, which will allow you to see whether the device has a screen or not.
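With the actions-on-google client library (v2), that check looks roughly like the sketch below inside a Dialogflow fulfillment; the intent name and response text are placeholders.

```typescript
import { dialogflow } from "actions-on-google";

const app = dialogflow();

app.intent("Welcome", (conv) => {
  // True on phones and Smart Displays, false on voice-only devices like Google Home.
  const hasScreen = conv.surface.capabilities.has("actions.capability.SCREEN_OUTPUT");

  if (hasScreen) {
    conv.ask("Here is a response that can include rich visual elements.");
  } else {
    conv.ask("Here is an audio-only response.");
  }
});
```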
I'm trying to follow this guide: https://console.bluemix.net/docs/services/IoT/GA_information_management/ga_im_index_scenario.html#scenario
But as soon as I hit "Manage Schemas" in the device type section I get an "Internal Error", saying I should contact the Admin... I'm not able to create schemas. What's going wrong?
Thanks in advance!
Tom
It is not entirely clear what you are trying to achieve. If you are simply trying to retrieve the raw events that have been published by your device, then you need to use a URL of the form:
/api/v0002/device/types/{deviceType}/devices/{deviceId}/events/{eventName}
This is documented in the Watson IoT Platform API reference.
It is worth noting that, if this is all you are trying to achieve, you do not need to follow the guide that you referenced. It is possible to retrieve the raw events using the REST API simply by defining the Device Type and registering your device.
The guide that you referenced describes the Data Management capabilities of the Watson IoT Platform. These capabilities allow you to process the raw events in order to generate/compute state for the device. This is more involved than simply retrieving the raw events because you need to configure schemas for the events and the state and then define the mappings that tell the platform how to compute the properties on the state when an event is received. The computed state for a device is a different resource and needs to be retrieved using a different URL:
GET /api/v0002/device/types/{typeId}/devices/{deviceId}/state/{logicalInterfaceId}
This is documented here
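As a rough sketch (not taken from the guide), both resources can be fetched with authenticated HTTPS GETs against the platform's REST API; the organization ID, API key/token, and the type, device, event, and logical interface identifiers below are all placeholders.

```typescript
// Sketch of retrieving a raw event and a computed state resource from the
// Watson IoT Platform REST API. All identifiers and credentials are placeholders.
const org = "myorg";
const base = `https://${org}.internetofthings.ibmcloud.com/api/v0002`;
const auth = "Basic " + Buffer.from("a-myorg-apikey:apitoken").toString("base64");

// Raw event published by the device.
async function getLastEvent(deviceType: string, deviceId: string, eventName: string) {
  const res = await fetch(
    `${base}/device/types/${deviceType}/devices/${deviceId}/events/${eventName}`,
    { headers: { Authorization: auth } }
  );
  return res.json();
}

// Computed state for a logical interface (requires the data management configuration).
async function getDeviceState(typeId: string, deviceId: string, logicalInterfaceId: string) {
  const res = await fetch(
    `${base}/device/types/${typeId}/devices/${deviceId}/state/${logicalInterfaceId}`,
    { headers: { Authorization: auth } }
  );
  return res.json();
}
```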
It's a little confusing, but that Manage Schemas section of the UI is not related to the feature you're looking at as part of the guide you referred to.
The guide you're looking at outlines how to configure event schemas and logical interface schemas for a Device Type using REST API calls. If you wish to create this configuration using the web UI, this is possible too but you need to get to the Interfaces section from the Device Types view: see this image
In this case, I clicked on the Humidity Sensor Device Type and then, in the expanded view, clicked on the Interface tab. From there you can use the Simple or Advanced flows to create the configuration.
The reason for the error is that the component that provides the function on that page (Real-time Insights) is not present in the eu-de region. The page should not be shown there, but for some reason it is.
If you are planning on following that guide then this is a different part of the UI from the “manage schemas” page, and is located under the “interfaces” section in “device types”. The function defined in that guide is available in eu-de.
I am developing an Android application and would like to know how to connect sensor devices/applications to Bluemix IoTF using API keys. In other words, I just want to minimize the registration task on the client side (the sensor devices) as much as possible. I know how to register devices manually with a device ID, token, and authentication, but I would like to know whether there is an easier way to do it. It would be great if someone could shed light on this from scratch. Thanks in advance.
There is a rich set of REST based APIs available at:
https://docs.internetofthings.ibmcloud.com/devices/api.html
and fully documented here:
https://docs.internetofthings.ibmcloud.com/swagger/v0002.html#/
One can use REST testing tools such as Postman to try these APIs out.
The reason I mention the REST APIs is that they provide a way for scripting or automating the registration of devices. There is an API called "Add device" that, when called, will register a new device instance of a specific device type against your IoT Foundation instance.
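As a rough sketch of that call (all identifiers and credentials below are placeholders), registering a device programmatically is a single authenticated POST to the "Add device" endpoint:

```typescript
// Sketch of calling the Watson IoT Platform "Add device" REST API.
// Organization, API key/token, device type, device ID, and auth token are placeholders.
const org = "myorg";
const apiKey = "a-myorg-apikey";
const apiToken = "apitoken";

async function registerDevice(typeId: string, deviceId: string, authToken: string) {
  const res = await fetch(
    `https://${org}.internetofthings.ibmcloud.com/api/v0002/device/types/${typeId}/devices`,
    {
      method: "POST",
      headers: {
        Authorization: "Basic " + Buffer.from(`${apiKey}:${apiToken}`).toString("base64"),
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        deviceId,
        authToken, // the token the device will later use to connect
        deviceInfo: { description: "self-registered sensor" },
      }),
    }
  );
  return res.json();
}
```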
I could imagine a new device that knows it is not yet registered executing a self-registration request to add itself as a new device. What I would suggest next is that you read the links above and see if they make sense. If they answer your question fully, great. If not, simply post a new question targeted at a specific area; we'll be watching this set of tags and will respond as quickly as we can.