Get text from voice command with Alexa Echo Dot

I am trying to create a new skill with the Alexa Skills Kit and Node-RED. I created a web service and linked Alexa to it. Now when I call the service via voice command on an Alexa Echo Dot, I don't get the raw text (the transcription of my voice); I only get the intents and slots...
Has anyone tried this? Did I miss something while configuring the Alexa skill?

Amazon does not make the whole transcript available, only the parts that match an intent and the extracted slots.
You used to be able to get this data, but it was deprecated for US users and was never made available to users in other regions.
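For context, here is a minimal sketch (in TypeScript) of what a skill's webhook actually receives for an IntentRequest. The field names follow the Alexa Skills Kit request format; the handler itself is hypothetical:

// Sketch of the relevant part of an Alexa IntentRequest payload.
interface AlexaIntentRequest {
  request: {
    type: "IntentRequest";
    intent: {
      name: string;                                         // e.g. "TurnOnIntent"
      slots?: Record<string, { name: string; value?: string }>;
    };
  };
}

// Hypothetical handler showing what is (and is not) in the payload.
function handleRequest(body: AlexaIntentRequest): void {
  console.log("intent:", body.request.intent.name);
  for (const slot of Object.values(body.request.intent.slots ?? {})) {
    console.log("slot:", slot.name, "=", slot.value);       // extracted values only
  }
  // Note: no field anywhere in the request carries the full
  // transcription of the user's utterance.
}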

Related

I need to create a NestJS back-end service using data passed from Google Home

As I said in the title, I need to create a service in my NestJS project. This service needs the data that the user verbally says to Google Home, so my questions are: how does Google Home translate the voice to text? What kind of data arrives at my NestJS application? Is it text, as I think, or something else? I can't find any tutorial online; could you please post some useful links and/or explain what I should do?
Long story short: I need to figure out how to take the data that arrives in an HTTP call from Google Home and use that data to return a result based on it.
Thank you for your help.
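For what it's worth, in the usual setup Google does the speech-to-text itself, and your back end receives structured JSON rather than audio. A minimal NestJS sketch, assuming a Dialogflow fulfillment webhook (the route is hypothetical; queryResult.queryText is the Dialogflow field carrying the recognized text):

import { Controller, Post, Body } from '@nestjs/common';

// Assumed shape of a Dialogflow fulfillment request: the voice has already
// been converted to text before it reaches your back end.
interface DialogflowWebhookRequest {
  queryResult: {
    queryText: string;                    // what the user said, as text
    intent: { displayName: string };
    parameters: Record<string, unknown>;
  };
}

@Controller('fulfillment')                // hypothetical route
export class FulfillmentController {
  @Post()
  handle(@Body() body: DialogflowWebhookRequest) {
    const said = body.queryResult.queryText;
    // Build and return a result based on what the user said.
    return { fulfillmentText: `You said: ${said}` };
  }
}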

Give Google Assistant device commands programmatically

Is it possible to give Google Assistant commands programmatically? For example, I'd like to be able to send the command as text, "turn on the fan", and have GA react as if that were the spoken command. I would also accept sending a JSON request in whatever format is needed (with device IDs or whatever the API requires).
My situation is that I have a ceiling fan controlled by Google Assistant. I want to be able to control it programmatically: for example, some event happens and my code wants to turn the fan on. Is there any way my code can tell GA to turn on the fan?
I tried using the Google Assistant SDK. I can send it text like "what time is it?" and get back text and audio, e.g. "It is 11:00 AM". However, I have a test device called "washer", and if I send the text "is the washer running?" I get back "Sorry, I didn't understand". If I speak the same words into my phone, I get back "The washer is running".
Why can't the GA SDK interact with my device? The credentials I give to the GA SDK are the same ones I use for my SmartHomeApp that defines the "washer" device.
To do this, you can set up a virtual Assistant device and then send commands to it.
Check out Assistant Relay, which is a service that sets up a virtual Assistant device and exposes a REST API so you can send text commands to it, as if they were spoken.
Per the documentation:
"Simply send Assistant Relay any query you would normally send Google Assistant, and Assistant Relay will call the Assistant SDK and execute your command."
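A sketch of what such a call might look like (the host, port, and payload fields follow the Assistant Relay documentation as I understand it; treat them as assumptions and check your relay's config):

// Send a text command to an Assistant Relay instance as if it were spoken.
async function sendCommand(command: string): Promise<void> {
  const res = await fetch('http://192.168.1.10:3000/assistant', {  // hypothetical host
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ command, user: 'myUser' }),             // user configured in the relay
  });
  console.log(await res.json());
}

// e.g. sendCommand('turn on the fan');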
As for the problem you are having with the Google Assistant SDK, I believe what you are trying to achieve is only possible with a device, be it physical or virtual, and not by using the SDK directly.
There are a lot of firewall and security issues in allowing each smart device to connect to the Internet. To alleviate this problem, Google's design uses one of their own devices as a fulfillment bridge that connects to the smart device locally.
You are locally, on your smartphone, hooking into Google Assistant.
The phone is the fulfillment glue for the "washer" device.
According to this page:
"A Google Home or Google Nest device is required to perform the fulfillment."
Due to the portable nature of cell phones, it does not make sense to allow one to be used as the fulfillment device remotely, hence the local hook.

Is it possible to retrieve the configured rooms/locations in the fulfillment service?

I have been experimenting with Google Smart Home, and the protocol flow seems very clear to me. In summary:
action.devices.SYNC - sent by Google Smart Home to fulfillment service to find out the available devices
action.devices.EXECUTE - sent by Google Smart Home to fulfillment service to execute a certain action on a device
On a smartphone or tablet, the customer can place a device in a certain location. This allows them to ask things such as "Turn everything in my office off". Internally, Google Smart Home knows which devices are located in the office and subsequently sends an action.devices.EXECUTE request for each of those devices, as explained above.
I am now wondering: is it also possible to retrieve the configured locations/rooms in the fulfillment service? Is this information exposed and available to retrieve?
It is not possible to receive information about a user's home layout through the Home Graph API. When the user gives a command like "Turn everything in my office off", you may get several OnOff commands in your fulfillment, although you will have no way of knowing the original query.
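To illustrate: for "Turn everything in my office off", the fulfillment sees something like the request below (shape per the Smart Home EXECUTE intent; the IDs are made up). The room grouping is resolved on Google's side, and neither the room nor the original query appears anywhere:

// Roughly what the fulfillment receives; only device IDs arrive.
const executeRequest = {
  requestId: "request-123",                            // hypothetical
  inputs: [{
    intent: "action.devices.EXECUTE",
    payload: {
      commands: [{
        devices: [{ id: "lamp-1" }, { id: "fan-2" }],  // the devices Google knows are in the office
        execution: [{
          command: "action.devices.commands.OnOff",
          params: { on: false },
        }],
      }],
    },
  }],
};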

How to use the full Dialogflow API functionality (and get speech recognition accuracy) through Google Assistant / Google Home?

I have a dialogue agent created through Dialogflow. I want to have a conversation with this agent on a Google Home device.
The problem:
The Dialogflow API (e.g. dialogflow-nodejs-client-v2) gives full access to agents built in Dialogflow. Most importantly, users can interact with the system either through text input or speech input (as a .wav file or an audio stream). When you send a request to the Dialogflow agent (e.g. detect intent from audio), it returns this happy response object, which crucially includes a "speechRecognitionConfidence" value (see the sketch after the list below).
But when interacting with the dialogue agent through a Google Assistant app, the request object sent to the webhook is missing the "speechRecognitionConfidence" value. This means that:
I don't have the input audio
I don't have the ASR confidence
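For reference, a sketch of the direct API path that does expose the confidence, using the current @google-cloud/dialogflow Node client rather than the older dialogflow-nodejs-client-v2 (project, session, and file names are placeholders):

import * as fs from 'fs';
import * as dialogflow from '@google-cloud/dialogflow';

// Detect intent from an audio file and read the ASR confidence.
async function detectFromAudio(): Promise<void> {
  const client = new dialogflow.SessionsClient();
  const session = client.projectAgentSessionPath('my-project', 'my-session');
  const [response] = await client.detectIntent({
    session,
    queryInput: {
      audioConfig: {
        audioEncoding: 'AUDIO_ENCODING_LINEAR_16',
        sampleRateHertz: 16000,
        languageCode: 'en-US',
      },
    },
    inputAudio: fs.readFileSync('utterance.wav'),
  });
  // Present on direct API calls, but absent from Assistant webhook requests:
  console.log(response.queryResult?.speechRecognitionConfidence);
}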
Questions:
Is it possible to send the ASR confidence (and any other useful info) to a webhook?
Is there another way to access the ASR confidence (i.e. by making an API call)?
Is there a way to run a program built using the Dialogflow API on a Google Home (or through the Google Assistant)?
Thank you in advance for any help. I've been struggling through endless documentation without success.

Google Assistant on Raspberry Pi. Can I use voice commands with Home Assistant?

How can I use Google Assistant on a Raspberry Pi 3 with Home Assistant (or any other home automation system)? I cannot find out how to use voice commands to manage lights, the TV, the garage door, etc.
Can someone help?
Thank you.
Do you already have Google Assistant configured on an RPi, or are you looking for how to do this? Or is the question about creating skills?
If you are looking to configure the RPi, you need to go to the Google Assistant SDK site: https://developers.google.com/assistant/sdk/prototype/getting-started-pi-python/
If it's the latter, you may want to check out Google Actions:
https://developers.google.com/actions/
If you are a bit into IoT, you can use an ESP8266 with Google Assistant to control almost anything in your house: lights, pumps, the garage door, etc. You only need to replace the LEDs with relays.
You can find the complete story here:
https://lucstechblog.blogspot.nl/2017/05/google-home-and-esp8266.html
Luc
You have to connect your Home Assistant project to Google Assistant. To do this you need the gactions CLI: create a project in https://console.actions.google.com, link your account with the project, and edit your configuration.yaml as follows:
google_assistant:
  project_id: someproject-2d0b8
  client_id: [long URL safe random string]
  access_token: [a different long URL safe random string]
  agent_user_id: [a string to identify user]
  api_key: [a Homegraph API key generated for the Google Actions project]
  exposed_domains:
    - switch
    - light
    - group
Finally, you'll need to open the Google Assistant app (on your phone), go into Settings > Home Control, tap + and add the app you created in Google Actions in order to interact with your devices using Google Assistant.
Here you can see the setup process:
https://home-assistant.io/components/google_assistant/
It works in my case.