
Can I specify Speech text for a Permission context in DialogFlow?
The samples and spec usually show:
return conv.ask(new Permission({ context: 'text to read', permissions: 'name of permission' }));
However, the 'text' that I want to display has accented characters in it that mess up the TTS engine. In the regular SimpleResponse, the text and speech can be separated:
conv.ask(new SimpleResponse({
  speech: '<speak>Go right</speak>',
  text: 'Go -->'
}));
Is there any way to specify the Speech for a Permission?

The only content that can go along with a Permission is the plaintext context property that you provide in the Permission constructor.
If it isn't working well, you may want to reconsider your design, as the context parameter is not meant to contain a lot of text.
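For what it's worth, the usual workaround is simply to keep the context short and plainly pronounceable, since there is no separate speech field. A minimal sketch with the actions-on-google Dialogflow client (the intent name and the 'NAME' permission here are illustrative):

const { dialogflow, Permission } = require('actions-on-google');

const app = dialogflow();

app.intent('request_permission', (conv) => {
  // context accepts plain text only; there is no speech/SSML counterpart,
  // so rephrase to avoid characters that trip up the TTS engine.
  conv.ask(new Permission({
    context: 'To address you properly',
    permissions: 'NAME',
  }));
});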

Related

Google Action Builder/Google Assistant: how to use a proper noun as a type

I would like to have a type that represents a proper noun, such as a family name, but I can't find anything on this.
My goal is to send some info to another person using Google Assistant and my backend.
For example, the user can say "Send this info to John Smith". The info is stored in my backend, so I have no problem finding it, and I have the ID of the person talking to Google Assistant, so that is no problem either.
The problem is: how can I get "John Smith" as a parameter that I send to my webhook, so my backend can check the user list in my database and send the info if the user exists? I tried to use a Type, but a family name doesn't match any pattern because it can be anything.
If anyone knows how to use Google Action Builder with a proper noun, I would be grateful to learn how to manage it.
You generally have two options.
Free form text approach
First, you can create a "Free form text" type, which can catch pretty much anything being said.
Then a custom intent can be trained with a few examples to pull out the correct proper noun (or anything else). Your webhook will be able to match it at that point.
Type Overrides approach
Alternatively, you can create a new type that starts with a preset of sample names that you use in your custom intent. Then, when the action starts, you can get the user's personal contact list in the webhook and set session type overrides.
Here's an example of the code I got from a music player action:
conv.session.typeOverrides = [{
  name: 'genre',
  mode: Mode.TypeReplace,
  synonym: {
    entries: Array.from(trackGenres).map(genre => ({
      name: genre,
      synonyms: [genre]
    }))
  }
}];
Depending on your system architecture, one of these may make more sense than the other. The first is better at capturing everything, but may require more post-processing in your webhook; the latter is better for precision, but names that don't exactly match an entry may not match at all. Either way, the webhook reads the result from the intent parameter, as in the sketch below.
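A minimal sketch with the @assistant/conversation library; the handler name 'send_info' and parameter name 'recipient' are placeholders for whatever you configure in the console:

const { conversation } = require('@assistant/conversation');

const app = conversation();

app.handle('send_info', (conv) => {
  // 'recipient' is the intent parameter backed by your free-form or overridden type.
  const recipient = conv.intent.params.recipient && conv.intent.params.recipient.resolved;
  // Check the name against the user list in your backend, then send the info.
  conv.add(`Sending the info to ${recipient}.`);
});

exports.fulfillment = app; // e.g. exported as a Cloud Function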

How to inject/track editor/document meta-data?

Is there a way to inject custom properties into a document/editor?
For example, I need to edit text from an API endpoint. It's easy to make the API call, display the data in an editor, and then edit the text. But I can't seem to find a good way to attach metadata to the text so that a POST can be made to update the source. I need to hold information like the API endpoint and document ID without injecting it into the main editor text.
I have been looking into everything from a CustomDocument/provider to a custom file system provider, but those options seem rather complicated for what I need.
Example:
api-endpoint: GET /docs
const resp = [{
  name: '/docs/note1.txt',
  id: 12345,
  content: 'some text document content'
}, {
  name: '/notes/othernote.txt',
  id: 54312,
  content: 'special text in another note'
}];
// open a document/editor and display the content of one of the docs from the API response
await workspace.openTextDocument({ content: resp[0].content, language: 'text' })
  .then(async (doc) => {
    await window.showTextDocument(doc, { preview: false });
    this.documents.push(doc);
  });
Now we have an editor displaying the content, but no way to link that content back to the API endpoint (with the doc ID). It seems I need to look into the file system provider so I can inject additional details into the file stats.
https://code.visualstudio.com/api/references/vscode-api#FileSystemProvider
Suggestions?
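One lightweight approach, short of a full CustomDocument or FileSystemProvider, would be a side table keyed by the document's URI. This is only a sketch (docMeta, openApiDoc, and metaFor are hypothetical helpers, not VS Code APIs):

const { workspace, window } = require('vscode');

// document URI string -> { endpoint, id }
const docMeta = new Map();

async function openApiDoc(endpoint, item) {
  const doc = await workspace.openTextDocument({ content: item.content, language: 'text' });
  docMeta.set(doc.uri.toString(), { endpoint, id: item.id });
  await window.showTextDocument(doc, { preview: false });
  return doc;
}

// later, when posting an edit back to the source:
function metaFor(doc) {
  return docMeta.get(doc.uri.toString());
}

The URI of an untitled document stays stable while it is open, so it works as a key for the session.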

Image does not show in rich response in Actions on Google

Images do not show in the basic card of my rich response.
I have provided the URL of the image, but it doesn't show.
@Prisoner, here is my code; please let me know if I am making any mistake.
app.intent('totalResponses', (conv, { Location }) => {
  // Cards need a screen; bail out on audio-only surfaces.
  if (!conv.surface.capabilities.has('actions.capability.SCREEN_OUTPUT')) {
    conv.ask('Sorry, try this on a screen device or select the ' +
      'phone surface in the simulator.');
    return;
  }
  conv.ask('Hello World');
  conv.ask(new BasicCard({
    text: `Hello`, // Note: two spaces before '\n' are required for
                   // a line break to be rendered in the card.
    title: 'Title: this is a title',
    image: new Image({
      url: 'https://drive.google.com/file/d/13eEr2rYhSCEDKDwCLab29AqFMsKuOi4P/view',
      alt: 'Image alternate text',
    }),
  }));
});
The problem is with the drive URL that you're using for the image. This URL is the one that is used to preview the image when you load it from Google Drive directly. It is an HTML page, rather than an image, so it won't display if you use it in a web page or in a card for the assistant.
To get the URL you need to use, you want to select the three dots at the top of that page and then "Embed item". You don't want to use the entire embed code - just the URL.
Follow the steps below for a solution:
1. Get a shareable link URL from Drive.
2. Open a UNIX terminal (an online shell such as https://www.tutorialspoint.com/execute_bash_online.php also works) and run:
echo '<paste your link>' | cut -d '/' -f 6
3. Copy the output, which is the file's unique ID.
4. Paste it after &id= in:
https://drive.google.com/uc?export=view&id=<paste the output unique key>
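Putting that back into the card from the question, a sketch with the corrected URL format (<FILE_ID> stands for the unique ID extracted above):

conv.ask(new BasicCard({
  title: 'Title: this is a title',
  text: 'Hello',
  image: new Image({
    // direct image form of the Drive link, not the HTML preview page
    url: 'https://drive.google.com/uc?export=view&id=<FILE_ID>',
    alt: 'Image alternate text',
  }),
}));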

How do I use audio for NoInputPrompts?

I'm trying to use a custom sound when Google Home receives no input from the user. But it seems to ignore any SSML data for no-input reprompts, converting it to plain text. This is what my data looks like in my fulfilment code:
response.data.google = {
  expect_user_response: true,
  no_input_prompts: [
    { ssml: 'This is <say-as interpret-as="characters">SSML</say-as> with a <break time="3s"/> pause.' },
    { ssml: '<audio src="https://...myurl.mp3" />' },
    { ssml: '<speak><audio src="https://..myurl.mp3" /></speak>' }
  ]
};
The first reprompt is stripped of SSML so Google Home just says
This is SSML with a pause
(but with no break). The other two reprompts are stripped down to just silence!
Does Google not support SSML on reprompts despite the SSML property being available?
It seems you need to wrap the string in a <speak> tag to get it to register as valid SSML.
The reason the 3rd one wasn't working is that Google Home seems to override it and instead tell the user it's about to exit the skill. This may be a bug.
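Assuming so, a sketch of the corrected payload, with each prompt wrapped in <speak> tags (keeping in mind the possible final-reprompt bug noted above):

response.data.google = {
  expect_user_response: true,
  no_input_prompts: [
    { ssml: '<speak>This is <say-as interpret-as="characters">SSML</say-as> with a <break time="3s"/> pause.</speak>' },
    { ssml: '<speak><audio src="https://...myurl.mp3" /></speak>' }
  ]
};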

What is a "slug"?

I am attempting to upload some emoticons to a particular site. The mandatory fields are:
Shortcut:
Image:
Category:
Suggested Category:
For the image, I just choose it from my files. The category fields are self-explanatory. I was trying to enter a web URL as the "shortcut", and got this error:
"Enter a valid 'slug' consisting of letters, numbers, underscores or hyphens"
I need to know what a "slug" is and how to create/get one.
A slug is the human-readable portion of a URL, which identifies the page to the viewer. Here is the explanation from Wikipedia:
Some systems define a slug as the part of a URL which identifies a page using human-readable keywords.
For example, the slug for this post is: "what-is-a-slug".
They are probably asking for a description, such as "a-description-of-my-awesome-new-emoticon" or "confused-face".
This is an internal identifier that will be used in URLs.
For example: http://url.com/category-slug/my-post
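If you need to generate one programmatically, here is a rough JavaScript sketch that follows the rules in the error message (letters, numbers, underscores, or hyphens):

function slugify(text) {
  return text
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s_-]/g, '') // drop anything that isn't allowed
    .replace(/\s+/g, '-')          // turn runs of whitespace into hyphens
    .replace(/^-+|-+$/g, '');      // trim leading/trailing hyphens
}

slugify('Confused Face!'); // -> 'confused-face'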