I wrote a smart home skill for Alexa using AWS Lambda and Python, and that works fine.
Now I have moved on to Actions on Google and I want to build the Action on AWS Lambda. I used AWS API Gateway to create a POST endpoint and gave that URL in the Fulfillment tab of Actions on Google. Now I want to check whether the connection between AWS Lambda and Actions on Google has actually been established, and I am pretty confused about how to do that.
The second thing I am wondering is whether Actions on Google supports Python, because there are no examples in Python and no one in the community seems to have used it.
On their GitHub repo the code is given in NodeJS. I wanted to understand how it works by running the simulation first, but that isn't helping much. I want to understand the flow from the basics: when I developed for Alexa, I started with authorization and then moved on to the controller.
Are you locked in on using Python in your stack? If you're flexible and can switch to NodeJS, you can use Actions on Google's NodeJS client library to connect with AWS Lambda.
Docs here: https://developers.google.com/actions/reference/nodejsv2/overview#example_aws_lambda_api_gateway
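Whichever runtime you pick, the quickest way to see whether Actions on Google is actually reaching your Lambda is to log the incoming request and watch CloudWatch Logs. Here is a minimal sketch in Python, assuming an API Gateway proxy integration; the stub response shape is illustrative and depends on which API.AI/Dialogflow webhook version you target:

import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # With a proxy integration the fulfillment request arrives as a JSON string in event["body"]
    body = json.loads(event.get("body") or "{}")
    # Anything logged here shows up in CloudWatch Logs, which confirms the connection end to end
    logger.info("Fulfillment request: %s", json.dumps(body))

    # Placeholder reply; adjust the fields to the webhook format you are actually using
    reply = {"speech": "Hello from Lambda", "displayText": "Hello from Lambda"}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(reply),
    }

If the request shows up in CloudWatch and the simulator speaks the reply, the API Gateway wiring is working.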
I've been searching the internet for three days trying to find something that would help me set up a PostgreSQL server on Google Cloud and connect it to my Flutter app. There is absolutely nothing, either in the documentation or anywhere else on the internet, about how a Flutter app can connect to it, set it up, or even do authentication without Firebase. I tried to get help on the console support page and it directed me here, which I understand is not the best kind of question for SO, but I have no other option. So can anyone help me with this? Is Flutter only designed to work with Firebase, or is Google Cloud just not ready for Flutter yet?
Since SQL databases should never be accessed directly over the internet, it is a good idea to have a web endpoint that exposes a limited API and accepts HTTP requests for the operations you need. One way to approach this would be to have your Flutter app trigger Cloud Functions, which then connect to Cloud SQL (the managed PostgreSQL service on GCP).
Here is documentation on how to connect Cloud Functions to Cloud SQL. Finally, here is an external blog post on how to use Flutter with Cloud Functions. Please note that we cannot guarantee the accuracy of external information, and it should serve as a reference to get you started.
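As a rough illustration only, an HTTP Cloud Function in Python that your Flutter app could call might look like the sketch below. The environment variables, table name, and unix-socket connection path are placeholders/assumptions; the exact wiring between the function and Cloud SQL is covered in the documentation linked above.

import json
import os

import psycopg2  # assumes psycopg2-binary is listed in requirements.txt

# Placeholder settings; replace with your own instance values
CONNECTION_NAME = os.environ.get("CLOUD_SQL_CONNECTION_NAME", "my-project:us-central1:my-instance")
DB_USER = os.environ.get("DB_USER", "postgres")
DB_PASS = os.environ.get("DB_PASS", "")
DB_NAME = os.environ.get("DB_NAME", "appdb")

def get_items(request):
    """HTTP Cloud Function the Flutter app calls instead of talking to PostgreSQL directly."""
    conn = psycopg2.connect(
        # Cloud Functions reach Cloud SQL over a unix socket under /cloudsql/
        host=f"/cloudsql/{CONNECTION_NAME}",
        dbname=DB_NAME,
        user=DB_USER,
        password=DB_PASS,
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, name FROM items ORDER BY id LIMIT 20")
            rows = [{"id": r[0], "name": r[1]} for r in cur.fetchall()]
    finally:
        conn.close()
    return (json.dumps(rows), 200, {"Content-Type": "application/json"})

The Flutter side then only needs an authenticated HTTPS call to the function's URL, which keeps the database itself off the public internet.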
I am an intern and I am currently automating a piece of software. What I need to do is automate the process of creating and starting an application in Cloud Foundry using the REST API (rest-assured). I can't start an app, because to start it I need to upload the bits, and I have searched for weeks without finding out how to do that. I can only use V2 of the Cloud Foundry API, because when I create an app using the V3 API it doesn't show up in the dashboard. I don't know why, so we decided to just leave it and use V2 instead.
My question is: is there any way to create and deploy/start an app using only the REST API, with V2 of the Cloud Foundry API? If there is a way to do this with the V3 API, I'm willing to search for a solution to that issue.
Thank you very much.
I think others commented on your question suggesting the cf command line, but if you want to use what's behind the scenes of cf, you can refer to the REST API Docs. At the top of the page there is an API version selector (I linked 2.9.0 since you mentioned V2).
More specifically, to create an app you can use the Create App Endpoint; then to upload the bits, you can use the Upload Bits Endpoint.
I hope this helps. Definitely check out the main docs, there are tons of endpoints that might be useful. Good luck on your internship!
EDIT: Just in case you need it, the CF API URL is https://api.ng.bluemix.net
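To make that concrete, here is a rough sketch of the create-then-upload-then-start sequence in Python with requests. The token, space GUID, and zip path are placeholders, and you should double-check the required fields against the V2 docs linked above:

import requests

API = "https://api.ng.bluemix.net"      # CF API endpoint from the EDIT above
TOKEN = "bearer <oauth-token>"          # e.g. what `cf oauth-token` prints
SPACE_GUID = "<your-space-guid>"        # placeholder

auth = {"Authorization": TOKEN}

# 1. Create the app (POST /v2/apps)
create = requests.post(f"{API}/v2/apps", headers=auth,
                       json={"name": "my-app", "space_guid": SPACE_GUID})
create.raise_for_status()
app_guid = create.json()["metadata"]["guid"]

# 2. Upload the bits (PUT /v2/apps/:guid/bits) as a multipart request
with open("my-app.zip", "rb") as bits:
    upload = requests.put(f"{API}/v2/apps/{app_guid}/bits", headers=auth,
                          data={"resources": "[]"},   # no resource matching
                          files={"application": ("my-app.zip", bits, "application/zip")})
upload.raise_for_status()

# 3. Start the app by updating its state (PUT /v2/apps/:guid)
start = requests.put(f"{API}/v2/apps/{app_guid}", headers=auth,
                     json={"state": "STARTED"})
start.raise_for_status()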
If you want to see how the CLI uses the REST API behind the scenes, you can temporarily set the environment variable BLUEMIX_TRACE on the command line with export BLUEMIX_TRACE=true.
Then you can try cf push, cf start, cf stop, etc., and you will see the HTTP requests the CLI makes. This has helped me a lot versus digging through the CF documentation :). Hope it helps!
I solved it by combining the V3 API with the V2 API. I now understand how to upload bits with the V3 API, and I used link [1] mainly to start the app. I don't think you can create a route with the V3 API, because I don't see any endpoint for that, so I used the V2 API instead to create and map the route. I also used the V2 API to create my app, because of what I stated in my question. Thanks for answering; without the answers given by the good people here I couldn't have found the best way to solve this.
[1] Create an App using V3: https://github.com/cloudfoundry/cloud_controller_ng/wiki/How-to-Create-an-App-Using-V3-of-the-CC-API
[2] V2 API doc: https://apidocs.cloudfoundry.org/3.1.0/routes/list_all_apps_for_the_route.html
[3] V3 API doc: http://v3-apidocs.cloudfoundry.org/version/release-candidate/index.html#get-assigned-isolation-segment
I'm building an application for Google Home as my graduation project for my degree in systems analysis and development. My difficulty is integrating with an external server.
I'm writing the code in JavaScript with Node.js, and my intents fetch data from an external service, the company's server. There is authentication, but even with plain JSON and no validation I cannot access the data.
I tried some examples from Google's own YouTube channel and the Dialogflow documentation, but I still can't get it to work.
Has anyone already done something similar who can help me?
I am interested in learning to use the Smartsheet API. In the past I created workflows in Google Apps Script, which has a built-in IDE that houses the script. Does Smartsheet have something similar? If not, where is a common place to keep your code and have it react to webhooks/events?
Regards,
Shawn
The API is just a way to communicate between your application and Smartsheet; there is no hosting for your executable code. Smartsheet provides a number of SDKs to make the calls easier to perform, but in theory you could use any language that can issue the REST calls. So pretty much any service that lets you run code would work, such as Amazon AWS, Google Cloud, Microsoft Azure, or others. Here's a brief comparison of services.
You can start developing on your own computer before you worry about cloud deployment. See the getting started guide and samples here: https://github.com/smartsheet-platform/getting-started
If you really need to respond to webhooks, your code will have to run somewhere that accepts incoming HTTP calls from the internet without being blocked by a firewall. This could be in your data center, in any of the cloud services, or via a tunnel such as https://ngrok.com/
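For the webhook piece, here is a minimal sketch in Python (Flask) of the kind of receiver you would host on one of those services or behind an ngrok tunnel. The verification handshake shown, echoing the Smartsheet-Hook-Challenge value back, is my reading of the Smartsheet webhook docs, so verify it there before relying on it:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/smartsheet-webhook", methods=["POST"])
def smartsheet_webhook():
    # Webhook verification: Smartsheet expects the challenge echoed back
    challenge = request.headers.get("Smartsheet-Hook-Challenge")
    if challenge:
        return jsonify({"smartsheetHookResponse": challenge}), 200

    # Otherwise it is an event notification; react to the changes here
    payload = request.get_json(silent=True) or {}
    for event in payload.get("events", []):
        print("Smartsheet event:", event)
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)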
I have some experience building chat and voice agents for other platforms, but I’m not using API.AI to understand natural language and parse intents. Do I have to replace my existing solution with API.AI?
Not at all. The advantages of using API.AI in creating a Conversation Action include Natural Language Understanding and grammar expansion, form filling, intent matching, and more.
That said, the Actions on Google platform includes a CLI, client library, and Web Simulator, all of which can be used to develop an Action entirely independent of API.AI. To do this you'll need to build your own Action Package, which describes your Action and expected user grammars, and an endpoint to serve the Assistant's requests and provide responses to your users' queries. The CLI can be used to deploy your Action Package directly to Google, and you can host your endpoint on any hosting service you wish; Google recommends App Engine on Google Cloud Platform.
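To give a feel for what that endpoint amounts to, here is a hedged Python (Flask) sketch of a bare fulfillment server that just pulls out the raw user text, hands it to your own NLU, and speaks a reply. The request and response field names are my reading of the conversation webhook format and should be checked against the current Actions SDK reference:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/fulfillment", methods=["POST"])
def fulfillment():
    body = request.get_json(force=True)

    # Raw user utterance (field names assumed from the conversation webhook format)
    raw_text = ""
    inputs = body.get("inputs", [])
    if inputs and inputs[0].get("rawInputs"):
        raw_text = inputs[0]["rawInputs"][0].get("query", "")

    # Hand raw_text to your own NLU here, then build the spoken reply
    reply = f"You said: {raw_text}"

    return jsonify({
        "expectUserResponse": True,
        "expectedInputs": [{
            "inputPrompt": {
                "richInitialPrompt": {
                    "items": [{"simpleResponse": {"textToSpeech": reply}}]
                }
            },
            "possibleIntents": [{"intent": "actions.intent.TEXT"}],
        }],
    })

if __name__ == "__main__":
    app.run(port=8080)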
I found this explanation from the official page most helpful.
API.AI
Use this option for most use cases. Understanding and parsing natural, human language is a very hard task, and API.AI does all that for you. API.AI also wraps the functionality of the Actions SDK into an easy-to-use web IDE that has conveniences such as generating and deploying action packages for you.
It also lets you build conversational experiences once and deploy to many other platforms other than Actions on Google.
ACTIONS SDK
Use this option if you have simple actions that have very short conversations with limited user input variability. These types of actions typically don't require robust language understanding and typically accomplish one quick use case.
In addition, if you already have an NLU that you want to use and just want to receive raw text and pass it to your own NLU, you will also need to use the Actions SDK.
Finally, the Actions SDK doesn't provide the modern conveniences of an IDE, so you have to manually create action packages with a text editor and deploy them to your Google Developer project with a command-line utility.
Google is aggressively pushing everybody to API.AI. The only SDK they have (Node.js) no longer supports expected events, for instance. Of course, you don't need to rely on their SDK (you can talk to the API directly), but they may change the API too. So proceed with caution.