Responding in a language other than the user language (Actions on Google) - actions-on-google

I am working on a bilingual application targeting Google Home, and the program needs to be able to correctly enunciate responses in a language other than English, even when the user request is in English.
I cannot find an API flag to set the TTS language for individual responses. Is there any mechanism for this?

Not yet, although there are hints about how it might be done in the future. (To be clear - there is no guarantee they will support it this way, or support such a feature at all.)
SSML supports the <voice> tag, which includes a languages attribute. Although Google's SSML documentation does not mention it, the <voice> tag is available, and some attributes (but not the languages attribute) do work. Given this hidden feature, it seems possible that multi-lingual support may be handled this way in the future.
In the meantime, you may wish to use the SSML <audio> tag to play a pre-recorded or otherwise generated clip.
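As a hedged sketch of that workaround, here is what the <audio> approach can look like today in a Node.js fulfillment webhook using the actions-on-google client library; the intent name and the clip URL are placeholders, not real assets.

```typescript
// Hedged sketch: an Actions on Google (Dialogflow) fulfillment handler that
// plays a pre-recorded clip via SSML <audio>. The intent name and the clip
// URL below are placeholders.
import { dialogflow } from 'actions-on-google';

const app = dialogflow();

app.intent('Get French Phrase', (conv) => {
  conv.ask(
    '<speak>' +
      'In French you would say ' +
      // <audio> is supported today; the element's text content is the
      // spoken fallback if the clip cannot be fetched.
      '<audio src="https://example.com/clips/bonjour.mp3">bonjour</audio>' +
    '</speak>'
  );
});
```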
Note that this doesn't address input in a different language than the locale the user has set.

Related

IBM Watson Assistant for non-English language - Intent is not recognized

I am working with IBM Watson Assistant for Korean and have found that the failure rate for detecting the correct intent is quite high. I therefore decided to check the language support, and I can see an important missing feature, which is entity fuzzy matching:
Partial match - With partial matching, the feature automatically suggests substring-based synonyms present in the user-defined entities, and assigns a lower confidence score as compared to the exact entity match.
This results in a chatbot that is not very intelligent, for which we would need to provide synonyms for each word. See the example below, where Watson Assistant in English can detect an intent from words that are not included in the training examples at all. I tested this and found that it is not possible for Korean.
I wonder if I have misunderstood something, or whether there is a workaround for this issue that I am not aware of.
By default, you start with IBM Watson Assistant and an untrained dialog. You can significantly improve intent and entity recognition by providing more examples, and then by using the dashboard to mark correctly understood conversations and to change incorrectly recognized intents / entities to the right ones. This is the preferred way, and it is simply part of the regular development process, which includes training the model.
Another method, this time as workaround, is to preprocess a dialog using Watson Natural Language Understanding which has Korean support, too.
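As a hedged sketch of that workaround, the call through the Node.js SDK might look like the following; the API key, service URL, and feature choices are placeholders, and how you feed the results back into Assistant is up to your dialog design.

```typescript
// Hedged sketch: run a Korean utterance through Watson Natural Language
// Understanding first, then use the extracted keywords to help route the
// request. Credentials and serviceUrl are placeholders.
import NaturalLanguageUnderstandingV1 from 'ibm-watson/natural-language-understanding/v1';
import { IamAuthenticator } from 'ibm-watson/auth';

const nlu = new NaturalLanguageUnderstandingV1({
  version: '2021-08-01',
  authenticator: new IamAuthenticator({ apikey: 'YOUR_API_KEY' }),
  serviceUrl: 'https://api.us-south.natural-language-understanding.watson.cloud.ibm.com',
});

async function preprocess(utterance: string): Promise<string[]> {
  const { result } = await nlu.analyze({
    text: utterance,
    language: 'ko',
    features: { keywords: { limit: 5 } },
  });
  // Pass these along to Assistant as context, or use them to pick an intent.
  return (result.keywords ?? []).map((k) => k.text ?? '');
}
```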
BTW: I use German language for some of my bots and it requires training for some scenarios.
In addition to Henrik's answer, here are a couple of tips for creating an intent:
Provide at least five examples for each intent.
Always re-train your system
If the system does not recognize the correct intent, you can correct it: select the displayed intent and then select the correct intent from the list. After your correction is submitted, the system automatically retrains itself to incorporate the new data.
Remember, the Watson Assistant service scores each intent's confidence independently, not in relation to other intents.
Avoid conflicts, and resolve any that arise - Watson Assistant detects a conflict when two or more examples in separate intents are so similar that it is confused about which intent to use.

Using babel / browserslist to give *not* supported fallback messages

Is there any hook or plugin in the Babel ecosystem (e.g., @babel/preset-env, a Rollup plugin using browserslist, a hook within core-js, etc.) that can detect a lack of support for the targeted features - i.e., that the current browser falls outside the targeted browser range - and let one hook into this information to redirect the application to a generic "not supported" page (if not one customized to the specific missing features)?
All of the efforts to minimize what is actually bundled (e.g., preset-env's useBuiltIns) are less of a draw if there is no assurance that, should one be too aggressive in excluding browsers, affected users will at least be alerted to which browsers can support the application.
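For illustration, here is a minimal hand-rolled guard of the kind people write in the absence of such a hook - not a Babel or browserslist feature. It would need to be compiled down to ES5 (or written conservatively) and loaded before the main bundle so that even excluded browsers can parse it; the feature list and fallback URL are assumptions.

```typescript
// Minimal sketch of a hand-rolled support guard (not a Babel feature).
// Compile to ES5 and load it before the main bundle. The feature checks
// and the fallback URL are placeholders.
var missing: string[] = [];

if (typeof Promise === 'undefined') { missing.push('Promise'); }
if (typeof fetch === 'undefined') { missing.push('fetch'); }
if (typeof Object.assign !== 'function') { missing.push('Object.assign'); }

if (missing.length > 0) {
  // Redirect to a generic "not supported" page, passing along the specific
  // missing features so that page can be customized.
  window.location.replace(
    '/not-supported.html?missing=' + encodeURIComponent(missing.join(','))
  );
}
```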

Is there an API for Safari Reader?

Does Safari Reader have an API which one can use to filter the text from a webpage (cleans adverts, unneeded parts of text etc.) for an iOS app?
If not, are there any alternatives?
I was just doing some research for my app; here's what I've found. I couldn't post all the links because I'm new, but they are easily googleable:
Read, Clear Read API: http://readapp.net/pub.html
Instapaper itself. Simple and Full API
Readability
RTCOOL
Feeds api
Boilerpipe
Goose
An overview of text extraction algorithms: http://www.readwriteweb.com/hack/2011/03/text-extraction.php
best of luck!
Nope. If you want access to the built-in one, you can file an enhancement request with the Apple bug reporter. There are also third-party services like Readability which, depending on the purpose of your app, you might be able to make use of.
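If you just need Reader-style extraction rather than Safari's own, one open-source option is Mozilla's Readability library (the engine behind Firefox Reader View). Here is a hedged Node.js sketch, assuming the @mozilla/readability and jsdom npm packages; on iOS you would port the same idea rather than run this directly.

```typescript
// Hedged sketch: Reader-style article extraction with Mozilla's open-source
// Readability library, run under Node.js via jsdom.
import { Readability } from '@mozilla/readability';
import { JSDOM } from 'jsdom';

async function extractArticle(url: string) {
  const html = await (await fetch(url)).text();
  // The url option lets jsdom resolve relative links in the page.
  const dom = new JSDOM(html, { url });
  const article = new Readability(dom.window.document).parse();
  // parse() returns null when the page doesn't look like an article.
  return article ? { title: article.title, text: article.textContent } : null;
}
```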

How does Github do localization?

They seem to go against the advice from the official Rails I18n guide:
You may be tempted to store the chosen locale in a session or a cookie. Do not do so. The locale should be transparent and a part of the URL.
I tried seeing if they set a cookie for the locale, and it seems like they don't.
So how do they do it, and why did they choose not to use URLs for different languages, like http://github.com/en/foo, http://github.com/fr/foo, etc.?
GitHub uses the _gh_sess session cookie to store that information.
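As an illustration of the cookie/session-based pattern (sketched in Express for brevity, not GitHub's actual Rails implementation), locale selection might look like this; the cookie name, supported locales, and routes are placeholders.

```typescript
// Illustrative sketch of cookie-based locale selection (Express, not
// GitHub's actual Rails code). Names are placeholders.
import express from 'express';
import cookieParser from 'cookie-parser';

const app = express();
app.use(cookieParser());

const SUPPORTED = ['en', 'fr', 'ja'];

// Resolve the locale on every request: explicit cookie first, then the
// Accept-Language header, then a default.
app.use((req, res, next) => {
  const fromCookie = req.cookies.locale;
  const fromHeader = req.acceptsLanguages(...SUPPORTED);
  res.locals.locale = SUPPORTED.includes(fromCookie)
    ? fromCookie
    : fromHeader || 'en';
  next();
});

// Let the user switch languages without changing the URL structure.
app.post('/locale/:lang', (req, res) => {
  if (SUPPORTED.includes(req.params.lang)) {
    res.cookie('locale', req.params.lang, { httpOnly: true });
  }
  res.redirect(req.get('referer') || '/');
});
```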
Multiple URLs are sometimes avoided because they create something that looks a lot like duplicate content to search engines. This can deplete your Google karma and lead to poor SEO performance.

Making CAPTCHA accessible to people with disabilities. What approaches have you used?

I'm nearing the completion of migrating our existing website to a CMS, and I've just finished creating all the various contact forms. The CMS I'm using has CAPTCHA built into its form builder, which is great, but the only method available is the "decipher-the-noisy-image" method.
This approach works well, but it limits access for people who might have reading or sight disabilities. I've worked around this by having a "help" page which allows those with disabilities to contact us by telephone, and I'm considering having a single-field form which says "Send us your email address and we'll contact you". Accessibility is of particular importance to me as a web developer, but from an organisational perspective, so is reducing the amount of form spam we receive.
So what I'd like to know is, has anyone in the community had any experience with other CAPTCHA methods and how have you managed to make them accessible to people with disabilities?
As a blind person, I find that reCAPTCHA is one of the better CAPTCHA services out there as far as audio options go. The issue with using SMS as the only alternative is that many visually impaired users don't have cell phones that allow them to read text messages.
A good CAPTCHA, like reCAPTCHA, usually includes an audio CAPTCHA. I have also seen sites that send an SMS message and have you enter the code from it (Google's Gmail will do this).
I am very interested in this because I am implementing a CAPTCHA in jQuery right now.
Many sites, including this one I believe, have an option to play noisy audio with embedded spoken numbers, as an audio equivalent to the traditional CAPTCHA image.
I find the result pretty spooky, actually. Reminds me of numbers stations.
As Michael said, audio that speaks each character of the CAPTCHA text is, for better or worse, a commonly provided option. If your CMS is PHP-based, or if PHP is available on your hosting infrastructure anyway, here's an open-source CAPTCHA application with an audio download option:
http://www.phpcaptcha.org/
I've implemented a production site with phpcaptcha, and it works as advertised.