Unable to receive data from device in ibmiotin - ibm-cloud

I'm using my mobile phone as an IoT device and it does appear in the "Browse Devices" section of the IBM Watson IoT Platform. The events of this device are also being recorded, as shown below:
{
"d": {
"id": "iotdemodev",
"ts": 1572278167346,
"lat": 12.921498,
"lng": 80.1854588,
"ax": -0.01,
"ay": -0.03,
"az": 0,
"oa": 0,
"ob": 0,
"og": 0
}
}
Now in Node-RED I have set up a flow like the one below; the deployment was successful, but no message is displayed in the debug pane.

It looks like there is some mixed-up information in what you are describing.
Try the following:
1) Use this page to simulate an IoT device:
https://quickstart.internetofthings.ibmcloud.com/iotsensor/
2) After that, grab your device id from top right (e.g. fde7a936a947)
3) Go to https://quickstart.internetofthings.ibmcloud.com and add device id and press Go button: fde7a936a947
4) You now should see data coming into quickstart
5) Now in Node-RED (as per the screens above), enter the device id from step 2 (fde7a936a947) into the Device Id field
6) Data should show up now. (A scripted alternative to the browser simulator is sketched below.)
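If you prefer to script step 1 instead of clicking around the browser simulator, a device can publish events straight to the quickstart broker over MQTT. A minimal sketch in Node.js, assuming the mqtt npm package is installed; the device type and id below are illustrative:
const mqtt = require('mqtt');

const deviceId = 'fde7a936a947'; // use your own id, as in step 2

// quickstart accepts unauthenticated device connections with this client id scheme
const client = mqtt.connect('mqtt://quickstart.messaging.internetofthings.ibmcloud.com', {
  clientId: 'd:quickstart:mydevicetype:' + deviceId
});

client.on('connect', () => {
  const event = { d: { ts: Date.now(), lat: 12.921498, lng: 80.1854588 } };
  // device events go to iot-2/evt/<eventId>/fmt/<format>
  client.publish('iot-2/evt/status/fmt/json', JSON.stringify(event), () => client.end());
});
After this runs, the event should appear on the quickstart page for that device id, just as in step 4.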
Based on your post ("I'm using my mobile phone as an IoT device and it does appear in the 'Browse Devices' section in IBM Watson IoT Platform...."), I would say that you are actually using a registered device, and in that case you won't receive the events via quickstart. You need to use the registered-device flow instead. For the Authentication drop-down, select "API Key"; this makes the "API Key" drop-down visible. Select "Add new ibmiot..." from that drop-down and enter the API key and authentication token that you can generate from the Watson IoT Platform.
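For illustration, here is a rough sketch of what the ibmiot input node does in registered mode: it connects to your organization's broker as an application, authenticating with the API key and token, and subscribes to device events. This assumes the mqtt npm package; the org, app id, key and token values are placeholders:
const mqtt = require('mqtt');

const org = 'yourorg'; // your 6-character organization id

const client = mqtt.connect('mqtt://' + org + '.messaging.internetofthings.ibmcloud.com', {
  clientId: 'a:' + org + ':myapp1',   // a:<org>:<appId> marks an application client
  username: 'a-yourorg-abcdef1234',   // the API key from the platform
  password: 'your-auth-token'         // the token generated with it
});

client.on('connect', () => {
  // wildcards: all device types, all device ids, all events, JSON format
  client.subscribe('iot-2/type/+/id/+/evt/+/fmt/json');
});

client.on('message', (topic, payload) => {
  console.log(topic, JSON.parse(payload.toString()));
});
If events print here but nothing reaches the debug node, the problem is in the Node-RED node configuration rather than in the platform.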

Related

Turning on facebook messenger camera through Dialogflow chatbot

I created a chatbot on Dialogflow which informs the user about the names of the members of my (extended) family and about where they are living. I have created a small database with MySQL which has these data stored, and I fetch them with a PHP script whenever this is appropriate, depending on the interaction of the user with the chatbot.
I have integrated this chatbot to Facebook Messenger. My question is the following:
Can I directly trigger the Facebook messenger camera to be turned on through Dialogflow (and without using any other front-end camera)?
The reason I want to turn on the camera is to allow the user to take a photo of himself/herself; I will then process the photo with some computer vision libraries to infer whether the person in the photo is a family member of mine. Obviously, I could simply create another basic front-end where I turn on a camera whenever e.g. an intent called 'Camera' is triggered, but I was wondering if I can do this directly in Facebook Messenger.
The JSON response that I am receiving at my back-end from Dialogflow contains only the following UI capabilities:
"surface": {
"capabilities": [
{
"name": "actions.capability.MEDIA_RESPONSE_AUDIO"
},
{
"name": "actions.capability.SCREEN_OUTPUT"
},
{
"name": "actions.capability.AUDIO_OUTPUT"
},
{
"name": "actions.capability.WEB_BROWSER"
}
]
}
therefore my first impression is that turning on a camera directly through Dialogflow (and Facebook Messenger) is not possible.
Am I right?
Well, you're right... but probably not for the reasons you think.
First, you're not going to be able to turn on the Facebook Messenger camera because you're not using Facebook Messenger. You're using the Google Assistant. Right now, the Google Assistant doesn't define a way to send an image to the Action you're interacting with. (It does work with Google Lens, but at this point, there is no way to get that to you.)
Second, you wouldn't actually "turn on" the camera. If the user sends you an image through the Messenger camera, you can process this by looking at the originalRequest field in the JSON you get on your fulfillment which should contain the message from Facebook which contains the image.
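As a hedged sketch of that second point, assuming a Dialogflow v1-style webhook body where originalRequest.data carries the raw Messenger message (the exact field layout may differ between API versions):
// Pull the URLs of any image attachments out of the fulfillment request body.
function extractImageUrls(webhookBody) {
  const fbMessage = webhookBody.originalRequest
    && webhookBody.originalRequest.data
    && webhookBody.originalRequest.data.message;
  if (!fbMessage || !fbMessage.attachments) return [];
  return fbMessage.attachments
    .filter((a) => a.type === 'image') // Messenger marks photos as "image"
    .map((a) => a.payload.url);        // a URL you can fetch for your CV pipeline
}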

How to wrap an existing chatbot for Google Assistant (Google Home)

We have a chatbot for our website today that is not built using Google technology. The bot has a JSON REST API to which you can send the question and which replies with the corresponding answers. So all the intents and entities are being resolved by the existing chatbot.
What is the best way to wrap this functionality in Google Assistant / for Google Home?
To me it seems I need to extract the "original" question from the JSON that is sent to our web service (when I enable fulfillment).
But since context is used to exchange "state", I have to find a way to exchange the context between Dialogflow and our own chatbot (see above).
But maybe there are other ways? Can it (invoking our chatbot) be done directly, without Dialogflow as a man in the middle?
This is one of those responses that may not be enough for someone who doesn't know what I am talking about and too much for someone who does. Here goes:
It sounds to me as if you need to build an Action with the Actions SDK rather than with Dialogflow. Then you implement a text "intent" in your Action - i.e. one that runs every time the user says something. In that text intent you ask the AoG platform for the text - see getRawInput(). Now you do two things. One, you take that raw input and pass it to your bot. Two, you return a promise to tell AoG that you are working on a reply but don't have it yet. Once the promise is fulfilled - i.e. when your bot replies - you respond with the text you got from your bot. (A sketch of this follows below.)
I have a sample Action called the French Parrot here https://github.com/unclewill/french_parrot. As far as speech goes it simply speaks back whatever it hears as a parrot would. It also goes to a translation service to translate the text and return the (loose) French equivalent.
Your mission, should you choose to accept it, is to take the sample, rip out the code that goes to the translation service and insert the code that goes to your bot. :-)
Two things I should mention. One, it is not "idiomatic" Node or JavaScript you'll find in my sample. What can I say - I think the rest of the world is confused. Really. Two, I have a minimal sample of about 50 lines that eschews the translation here: https://github.com/unclewill/parrot. Another option is to use that as a base and add the code to call your bot, plus the Promise-y code to wait on it.
If you go the latter route remove the trigger phrases from the action package (action.json).
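To make that concrete, here is a minimal sketch of the promise-based text intent, assuming the v1 actions-on-google ActionsSdkApp API used in the parrot samples; askBot(), its URL and its { question } / { answer } payload shape are hypothetical stand-ins for your bot's JSON REST API:
const fetch = require('node-fetch'); // or any HTTP client you prefer

// Hypothetical call to your existing bot's REST API.
function askBot(question) {
  return fetch('https://your-bot.example.com/ask', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question })
  }).then((res) => res.json())
    .then((json) => json.answer);
}

// app is the ActionsSdkApp instance your request handler passes in.
function textIntent(app) {
  const question = app.getRawInput(); // the raw text the user just said
  // Returning the promise tells the platform the reply is still in flight;
  // when the bot answers, hand its text back with app.ask().
  return askBot(question).then((answer) => app.ask(answer));
}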
So you already have a backend that processes user inputs and sends responses back, and you want to use it to process a new input flow (coming from Google Assistant)?
That's actually my case: I have a service running as a Facebook Messenger chatbot and recently started developing a Google Home Action for it.
It's quite simple. You just need to:
Create an action here https://console.actions.google.com
Download GActions-Cli from here https://developers.google.com/actions/tools/gactions-cli
Create a JSON file action.[fr/en/de/it].json (choose a language). The file is your means of defining your intents and the URL of your webhook (a middleware between your backend and Google Assistant). It may look like this:
{
"locale": "en",
"actions": [
{
"name": "MAIN",
"description": "Default Welcome Intent",
"fulfillment": {
"conversationName": "app name"
},
"intent": {
"name": "actions.intent.MAIN",
"trigger": {
"queryPatterns": [
"Talk to app name"
]
}
}
}
],
"conversations": {
"app name": {
"name": "app name",
"url": "https://your_nodejs_middleware.com/"
}
}
}
Upload the JSON file using gactions update --action_package action.en.json --project PROJECT_ID
AFAIK, there is only a Node.js client library for Actions on Google (https://github.com/actions-on-google/actions-on-google-nodejs), which is why you need a Node.js middleware before hitting your backend.
Now, user inputs will be sent to your Node.js middleware (app.js) hosted at https://your_nodejs_middleware.com/ which may look like:
// Require Express and everything else needed to build a Node.js server;
// look up how to build a simple web server in Node.js if you are new to this.
const { ActionsSdkApp } = require('actions-on-google');
app.post('/', (req, res) => {
req.body = JSON.parse(req.body);
const app = new ActionsSdkApp({
request: req,
response: res
});
// Create functions to handle requests here
function mainIntent(app) {
let inputPrompt = app.buildInputPrompt(false,
'Hey! Welcome to app name!');
app.ask(inputPrompt);
}
function respond(app) {
let userInput = app.getRawInput();
//HERE you get what user typed/said to Google Assistant.
//NOW you can send the input to your BACKEND, process it, get the response_from_your_backend and send it back
app.ask(response_from_your_backend);
}
let actionMap = new Map();
actionMap.set('actions.intent.MAIN', mainIntent);
actionMap.set('actions.intent.TEXT', respond);
app.handleRequest(actionMap);
});
Hope that helped!
Thanks for all the help; the main parts of the solution are already given, but I summarize them here:
an action.json that passes everything on to the fulfillment service
a man in the middle (in my case an IBM Cloud Function) to map JSON between the services
Share context/state through the conversationToken property (sketched below)
You can find the demo here: Hey Google talk to Watson
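For reference, sharing state through conversationToken boils down to echoing it in the raw Actions SDK webhook response; a minimal sketch, with illustrative values:
{
  "conversationToken": "{\"watsonContext\":{\"state\":\"awaiting_city\"}}",
  "expectUserResponse": true,
  "expectedInputs": [{
    "inputPrompt": {
      "initialPrompts": [{ "textToSpeech": "Which city are you asking about?" }]
    },
    "possibleIntents": [{ "intent": "actions.intent.TEXT" }]
  }]
}
On the next turn the same token comes back on the request under conversation.conversationToken, so the middleware can rehydrate the Watson conversation context from it.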

Still cannot find google action using Action SDK on Google Home App

I am developing a Google Smart Home App, and I followed the official development documentation:
Create my project in Google Console
'gactions update --action_package action.json --project {myproject}'
Complete the necessary information, including App Information and Account Linking;
'gactions test --action_package action.json --project {myproject}'
I have tried many times, and even created the project with another account, but the result was the same: my smart home app still does not appear in my Google Home.
Some people say that after 'gactions test' their app appears in the Home Control section as [test]{project_name}, but that does not work for me. It is very frustrating, and this step has blocked my further work for many days.
Furthermore, I want to confirm the following questions:
When I created my smart home app, the section is like this:
Actions added from Actions SDK
Actions: (this place is empty; is that normal, or is something missing, like an agent?)
For a Smart Home App in test mode, is it restricted to voice-only conversations with the Google Home speaker, or should it appear directly in the Google Home App after 'gactions test'?
Any help will be appreciated!
My action.json:
{
  "actions": [{
    "name": "actions.devices",
    "deviceControl": {},
    "fulfillment": {
      "conversationName": "automation"
    }
  }],
  "conversations": {
    "automation": {
      "name": "automation",
      "url": "https://xxxxx"
    }
  }
}
Running it for the first time can be a bit painful. Below is what I do to get it working.
Make sure you have filled in all details on the action console.
Click on the Test button to start your app.
From the simulator, type "Talk to <your app name>". It might ask you to log in.
If you can see the response from your app, then it should work on your Google Home.
Please note that your test app will expire if you leave it idle for too long. You need to click on the Test button to run it again, or your Google Home won't be able to launch your app.
Also, in your action.json, if you are pointing your intent to a webhook, please make sure your webhook is accessible from Google.

IBM IoT Real-Time Insights and multiple map events

I am using IBM IoT Real-Time Insights and sending in an event which contains:
{
"d": {
"speed": 105,
"lat": 34.147543,
"lng": -99.300058
}
}
When I send in an event, the map correctly updates with the expected marker.
However, when I send in a second event without changing any of its content, the map changes and shows the marker at a different position.
My gut is telling me that the second event is providing coordinates of 0, 0, but it is exactly the same event as the first. If I select a new dashboard and go back to the first one with the map, this pattern repeats: the next event shows fine, and the one after that shows me West Africa. I am able to trivially recreate this.
The resetting of latitude and longitude coordinates with the maps widget is a limitation that is currently being addressed by the IoT Real-Time Insights team. Keep an eye on the What's new in Bluemix section for news about new features as they are released.

How to list a Chrome App in the Store's Offline Enabled Category?

How do you take a Chrome packaged app that works offline and get it listed in the Chrome Web Store's Offline Enabled collection?
While most app listing attributes are controlled from the Chrome Web Store Developer Dashboard, the bit you flip to indicate that the app works offline lives in the app's manifest file! Set offline_enabled to true:
manifest.json:
{
"name": "My App",
// ...
"offline_enabled": true,
// ...
}