Custom commands for Google Assistant SDK - raspberry-pi

I've got a Raspberry Pi running the Google Assistant SDK, and it's working amazingly so far. I'm just wondering how I could create custom commands for the Assistant that would then trigger bash commands on the Pi.
Any help would be greatly appreciated.

You can add your own functions, call external commands, etc. using the pattern in assistant_library_with_local_commands_demo.py from the aiyprojects-raspbian project on GitHub. Here is a commit where I add my own custom local commands to Google Assistant.
You do have to jump through some hoops to use the Cloud Speech API, but it's still using the Google Assistant. You don't have to use the Actions on Google approach described by @Ayoub above.
Note: if you fail to include assistant.stop_conversation(), as I did at first, you get a weird response with two voices talking to you.
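For reference, here is a condensed, hedged sketch of that demo's pattern. The credentials path, trigger phrase, and shell command are placeholders, and newer versions of google-assistant-library also expect a device model id:

import json
import subprocess

import google.oauth2.credentials
from google.assistant.library import Assistant
from google.assistant.library.event import EventType

def process_event(assistant, event):
    # Fires once the Assistant has finished transcribing what you said.
    if event.type == EventType.ON_RECOGNIZING_SPEECH_FINISHED and event.args:
        text = event.args['text'].lower()
        if text == 'switch on the lamp':  # placeholder trigger phrase
            assistant.stop_conversation()  # stop the normal Assistant reply
            subprocess.call(['/home/pi/lamp_on.sh'])  # placeholder script

with open('/path/to/credentials.json') as f:  # placeholder path
    credentials = google.oauth2.credentials.Credentials(token=None, **json.load(f))

with Assistant(credentials) as assistant:
    for event in assistant.start():
        process_event(assistant, event)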

As far as I know, what you are looking for is more complicated than that.
The Assistant does not have direct access to the environment where it is installed.
So if it's on your phone, you cannot just run something on the phone directly.
What you're looking for is to create an Action on Google:
https://console.actions.google.com
The Action on Google that you create will be triggered by your command to the Assistant; it will then itself trigger a webhook (a function running in the cloud), possibly hosted on your Pi (if you have a web server that you can access publicly), and from there you can run whatever script you are talking about.
I have done that using:
Google Home ==> Actions on Google ==> API.AI ==> Raspberry Pi ==> run action
Feel free to ask if anything is unclear.
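To make the Raspberry Pi end of that chain concrete, here is a minimal, hedged sketch of a fulfillment webhook using Flask. The route, action name, and script path are placeholders, and the request/response fields assume an API.AI v1-style agent:

import subprocess
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    req = request.get_json(force=True)
    # 'action' is whatever you named the action on your API.AI intent.
    action = req.get('result', {}).get('action', '')
    if action == 'turn_on_led':  # placeholder action name
        subprocess.call(['/home/pi/led_on.sh'])  # placeholder script
        return jsonify({'speech': 'Done.', 'displayText': 'Done.'})
    return jsonify({'speech': "I don't know that command."})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)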

Is KubeFlow still supported on GCP?

I am trying to use KubeFlow on GCP and I am following this codelab, but "click-to-deploy" is no longer supported, so I followed the "kubectl and kpt" documentation instead. However, I keep getting the error "You cannot perform this action because the Cloud SDK component manager is disabled for this installation." and none of the solutions I found worked. Two friends of mine told me they have been trying to make KubeFlow work since last year and it never worked, but I still see people posting questions about KubeFlow on Stack Overflow, so I want to ask: is it still supported, and if so, where can I find a decent guide to follow?
Thanks!
I finally got it working. For that error message, it turned out that I just didn't install the Cloud SDK properly. There will be a lot of other issues too down the road, but at least the KubeFlow web UI is working for me now.
Yes. As the "kubectl and kpt" guide says, the first step in preparing to install a cluster is installing gcloud, the CLI that manages authentication, local configuration, developer workflow, and interactions with Google Cloud APIs.
Without it you simply can't work with the objects involved (in your case you need to enable kpt and anthoscli beta) or perform tasks like creating a Compute Engine VM instance, managing a Google Kubernetes Engine cluster, and deploying an App Engine application, either from the command line or in scripts and other automations.
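As a hedged sketch of the usual fix: the error you quoted typically appears when gcloud was installed from a distribution package that disables the component manager, so reinstall it with Google's official installer and then add the components the guide relies on (component names may vary by Kubeflow version):

curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init
gcloud components install kubectl kpt anthoscli beta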

How to connect the back end and the front end and use the Discovery API in IBM cloud app services?

I am very new to using APIs, so please excuse me. I am currently using a Python-Django app service from IBM Cloud App Services along with the IBM Watson Discovery resource. I have followed all the steps given here:
https://console.bluemix.net/docs/apps/tutorials/tutorial_web.html#before-you-begin
I have a machine with Docker, so the app got built successfully. However, I am lost as to how I am supposed to get the front end (which I am writing in Bootstrap and JavaScript) to connect to the back end and link the API.
EDIT
For example: I want my app to accept documents, feed them into Discovery, extract the keywords and sentiment, and display them in the UI. How do I know what to access from the server-side code and what to link where in the UI?
It is a very broad question, but it's a compulsory project I need to do and I am clueless. Please help!
Before you try to integrate an API, you will need to be familiar with Python and Django. If that is not the case, then you really need to go through a series of tutorials.
Then, before deploying to the cloud, you will be better off running your Django app locally on your laptop. Use pip to install the watson-developer-cloud PyPI module and use the API documentation to build the Python code in your Django application - https://www.ibm.com/watson/developercloud/discovery/api/v1/python.html?python#query
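As a hedged sketch of what such a query might look like (the credentials and IDs are placeholders, and the exact call signature depends on your SDK version):

from watson_developer_cloud import DiscoveryV1

# Placeholder credentials and IDs from your IBM Cloud service page.
discovery = DiscoveryV1(
    version='2018-03-05',
    username='YOUR_USERNAME',
    password='YOUR_PASSWORD')

results = discovery.query(
    environment_id='YOUR_ENVIRONMENT_ID',
    collection_id='YOUR_COLLECTION_ID',
    natural_language_query='example query',
    count=5)

# Results include enrichments such as keywords and sentiment, which a
# Django view could return as JSON for your Bootstrap/JavaScript front end.
print(results)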
If none of this makes sense, then you need to brush up on your knowledge of Python, Pip, and Django.
When you have the app running on your machine, you will be ready to package it up into either a Docker image or a Cloud Foundry container and deploy it to the cloud.

IoT using Google Cloud Service IoT solutions (Weave): How to connect Raspberry Pi and lighting the LED?

I am trying to connect my Raspberry Pi to Google's IoT Cloud solutions using Weave. I have already done this with AWS and IBM Bluemix, but could not find a way to do the same on Google Cloud. Judging by their documentation, it seems that some of the files have been deprecated or not updated.
Moreover, they are written in C, and I am not much of a C guy. For both IBM Bluemix and AWS I used Python to connect my Pi to the IoT service, establish the subscriber, and exchange messages over the MQTT gateway.
Can anyone suggest anything regarding this?
Google Weave getting started
To be more specific, certain packages turned up in the error logs when I ran the step below:
make -C examples/host/light
The logs showed messages like
could not find lldap
could not find llssh2
even after I installed those libraries on my development machine.
Because of the errors above, the command
./out/host/examples/light/light
cannot be run, since the path
out/host/examples/light/light
is never created by the make command. Any suggestions for this?
You might want to try the new Google Cloud IoT Core product instead of Weave - full disclosure, I worked on it. It's currently in public beta and enables the scenarios you're trying to address. You should be able to use MQTT to communicate to/from your device.
There's a high-level overview of the platform on YouTube as well as an industrial applications focused talk from Google I/O.
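As a rough sketch of what the device side can look like in Python, using the paho-mqtt and PyJWT packages (every ID and the key path below are placeholders for your own registry setup):

import datetime
import jwt  # pip install pyjwt (plus the cryptography package for RS256)
import paho.mqtt.client as mqtt  # pip install paho-mqtt

PROJECT_ID = 'my-project'        # placeholder
REGION = 'us-central1'           # placeholder
REGISTRY_ID = 'my-registry'      # placeholder
DEVICE_ID = 'my-pi'              # placeholder
PRIVATE_KEY_FILE = 'rsa_private.pem'  # placeholder device key

def create_jwt():
    # IoT Core authenticates devices with a JWT signed by the device key.
    claims = {
        'iat': datetime.datetime.utcnow(),
        'exp': datetime.datetime.utcnow() + datetime.timedelta(minutes=60),
        'aud': PROJECT_ID,
    }
    with open(PRIVATE_KEY_FILE) as f:
        return jwt.encode(claims, f.read(), algorithm='RS256')

client = mqtt.Client(
    client_id='projects/{}/locations/{}/registries/{}/devices/{}'.format(
        PROJECT_ID, REGION, REGISTRY_ID, DEVICE_ID))
# The bridge ignores the username; the password must be the JWT.
client.username_pw_set(username='unused', password=create_jwt())
client.tls_set()  # the MQTT bridge requires TLS
client.connect('mqtt.googleapis.com', 8883)
client.publish('/devices/{}/events'.format(DEVICE_ID), 'LED on', qos=1)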

Developing with Azure Mobile Services?

What is currently the "best" way to develop a back-end system in Azure Mobile Services?
Specifically, what tools are available? From what I've seen, most examples just go to the Management portal and manually add a few lines into the script window. This is worse than using just Notepad, and doesn't have any concept of version control...
Is there any way to make a project in VS 2012 that contains all the Node.js code that will run in the Azure Mobile service? Is there a way of fully running that code on a local development environment that mimics the Mobile Services?
I need to have server-side code with much more complexity than is shown in most of the Mobile Services samples or documentation that I've been able to find.
I have a web site and a Win 8 Store app that need to authenticate against, and access relatively complex data structures from, a back-end database. The solutions being pushed right now all seem to put Mobile Services at the center, using simple REST against raw tables, but all the examples are too simple to be useful.
Can someone point me to a "real-life" sample of using Mobile Services, and a "mature" way of developing and testing such a system using the tools in Visual Studio?
Thanks.
Why there is no option other than the Management portal is really beyond me. It seems very awkward for a C#/.NET developer to go back to Notepad-style programming with console.log() debugging.
What I would love to see is some Node.js entry points that you could connect to a regular C# assembly which could fulfill the request (as in ASP.NET MVC or Web API) having the full .NET Framework at your disposal.
What I could see as a possible architecture is to have:
ASP.NET MVC hosted on Azure --- writes processed data with logic to ---> Azure SQL DB <--- reads from --- Azure Mobile Services ---- bridge to ---> Mobile devices
Or
Cloud Worker Role on Azure ---- crunching/processing ----> Azure SQL DB <---- reading/writing raw data ---- Azure Mobile Services ---- bridge to ---> Mobile devices
You can use Mobile Services for the device-facing facilities, scheduling, and push notifications with limited code, and do most of the coding in a managed .NET environment.
AMS (Azure Mobile Services) and Azure in general have advanced dramatically since this post and its answers were written.
Some of this stuff still holds true. If you have a ton of Node.js written outside the Azure portal, you will want to copy and paste it into the portal's custom API section, and perhaps set up SQL back-end tables for CRUD operations.
The hope for C# developers is that a .NET back end is now in preview, so you can skip Node.js entirely. There are some bugs to work out, but in six months this should be fairly solid.
I had questions and issues, and Carlos Figueira (carlosfigueira) was very helpful.
Azure Mobile Services - Getting more user information
Josh covers unit testing server-scripts here: http://www.thejoyofcode.com/Unit_testing_Mobile_Services_scripts_Day_7_.aspx
In this tutorial, he uses the Mocha testing framework for JS (in TDD mode) and walks through examples of testing an INSERT script that encrypts the value of a particular property (text) and a READ script that decrypts it (the value is encrypted at rest in the SQL db).
You can also find an aggregation of links and tutorials here.
I would suggest building this solution with Windows Azure Mobile Services, especially since it now supports Node.js NPM modules; that means you can create the API you want on Windows Azure using NPM packages and work with it easily through WAMS. Have a look at the following link to better understand what I mean:
http://weblogs.asp.net/scottgu/archive/2013/06/14/windows-azure-major-updates-for-mobile-backend-development.aspx
For the client, I also suggest building with SignalR, which is designed for cases such as yours where real-time applications require a lot of transactions from the server side.
http://www.asp.net/signalr
You can also find more details about how to integrate the two here: http://hhaggan.wordpress.com/2013/07/12/signalr-node-js/
I hope this helps; let me know if you need anything else.
For running locally: the mobile service has the same Kudu environment available in Azure Websites, so you can browse to https://your_service_name.scm.azure-mobile.net. If you navigate to the Debug Console from the top nav, you can download everything running in the site/wwwroot folder.
You can run this Node.js project locally (on Windows only if you require the SQL Server npm package). Your code is in App_Data/config/scripts. If you replace the downloaded content with your current local git working copy, you can develop and debug locally, and then push changes as usual.
Tools I use:
Eclipse with a JS environment (or any Node.js IDE)
Git
Postman
Steps:
Enable source control for your Azure Mobile Service.
Pull to your local machine and create an Eclipse project from the source.
Make changes and push.
Test with Postman.
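In command form, that loop is roughly the following (the repository URL follows the scm endpoint pattern mentioned elsewhere in this thread, and the service/ folder layout is an assumption based on my own deployment):

git clone https://your_service_name.scm.azure-mobile.net/your_service_name.git
# edit the scripts, e.g. under service/table and service/api
git add -A
git commit -m "update table scripts"
git push origin master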
This procedure lets me develop really fast, and Eclipse flags the common JS errors. But it has obvious downsides:
No debugging (I use console.log)
The project ends up with a lot of commits (it's hard to use git for proper source control)
I just did a blog post on running Azure Mobile Services locally: http://www.mikelanzetta.com/2014/09/running-azure-mobile-services-locally/ - basically it interrogates the API and starts up Express, allowing you to run Mocha yourself locally. It's a bit cleaner than pulling down the full wwwroot from the scm link, and I found that using my local runner as a git submodule made it easy to work with (and easy to use VSO for managing my tests).
Anyway, for actual development I use the Git integration and WebStorm - it automatically figures out the tasks in my local Gruntfile and makes it easy to run and test. Once it's deployed, Postman is helpful.

how to use the API calls in bonjour mDNSPosix

I have installed Apple's mDNSResponder on Linux and am able to publish a service via the command line:
$ dns-sd -P Stack Overflow _ftp._tcp. . 80 AIR 14.99.8.77
Now I want to know how to use the corresponding API calls in my app to publish the same service.
When I compiled the Bonjour source code I got two libraries, libdns_sd.so and libnss_mdns-0.2.so.
Can anyone suggest how to call these APIs from my Linux C code?
Link libdns_sd.so into your project and include dns_sd.h. Then read through dns_sd.h to see the various calls, such as DNSServiceRegister for registering a service and DNSServiceBrowse for browsing.