I've created devices and managed them with Node-RED, and I'm receiving real-time data on my IoT platform board. I wanted to access that data via the REST API, and I found this: https://docs.internetofthings.ibmcloud.com/apis/swagger/v0002-beta/info-mgmt-beta.html
Specifically, I wanted to use this:
/device/types/{typeId}/devices/{deviceId}/state/{applicationInterfaceId}
I created a schema by following this: https://developer.ibm.com/courses/labs/create-device-schema-internet-things-platform-service-ibm-bluemix-dwc013/
Then I made requests to /api/v0002/applicationinterfaces and /api/v0002/schemas.
But both results were:
{"results": [], "meta": {"total_rows": 0}}
How can I create the schema and application interface?
The /api/v0002/applicationinterfaces and /api/v0002/schemas endpoints return 0 results because you first need to use the POST methods described in https://docs.internetofthings.ibmcloud.com/apis/swagger/v0002-beta/info-mgmt-beta.html to create the schema and application interface.
However, from your question I think you simply want to access the device data. In that case, see the documentation for developing applications at https://console.bluemix.net/docs/services/IoT/applications/api.html#api; in particular, you can access the last event for a particular device using the last event cache, e.g. /api/v0002/device/types/{deviceType}/devices/{deviceId}/events
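For illustration, here is a minimal Node.js sketch of calling that last event cache endpoint, assuming HTTP basic auth with an application API key/token; ORG_ID, DEVICE_TYPE, DEVICE_ID and the credentials are placeholders, and the exact response fields may vary:

// Hypothetical sketch: read the last cached events for one device.
// ORG_ID, DEVICE_TYPE, DEVICE_ID, API_KEY and AUTH_TOKEN are placeholders.
const https = require("https");

const options = {
  hostname: "ORG_ID.internetofthings.ibmcloud.com",
  path: "/api/v0002/device/types/DEVICE_TYPE/devices/DEVICE_ID/events",
  auth: "API_KEY:AUTH_TOKEN", // basic auth with an application API key/token
};

https.get(options, (res) => {
  let body = "";
  res.on("data", (chunk) => (body += chunk));
  res.on("end", () => {
    // Each cached event carries its data in a base64-encoded "payload" field.
    for (const event of JSON.parse(body)) {
      console.log(Buffer.from(event.payload, "base64").toString());
    }
  });
});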
I finally found the correct REST call here, which returns event data, i.e. sensor data.
On that link, refer to the Last Event Cache. The REST call returns a "payload" field with an encoded (base64) value. Just decode that value and you will see the sensor data.
Thanks,
Rahul
I am once again asking for your help.
Let me tell you my current situation first.
I have a device that connects to Cloud IoT Core and sends data using MQTT.
The data then goes to a Pub/Sub topic.
Then a Cloud Function gets triggered, which stores the data inside Firestore.
Another Cloud Function gets triggered, which sends me an email with the data stored in Firestore.
The size of the data is about 1 kilobyte, and I expect to send about 10,000 messages per month.
I need that data to create a dashboard, for which I am using Google Data Studio.
To get my data in there, I installed the Firebase extension "Stream Collections to BigQuery" to send the data to BigQuery. From there I just had to click a few buttons to automatically stream the data from BigQuery to Google Data Studio.
Everything works so far, but as you can see I store the data four times: once via email, once inside Firestore, once inside BigQuery, and once in Data Studio. All of this is going to cost a lot of money in the long term, because the stored data doubles every month.
What I need from you guys is some advice on best practices.
Is there a way to store the data directly inside BigQuery when it arrives in Pub/Sub?
If so, can I also send an email with the data as an attachment?
Is BigQuery a good solution, or should I use Cloud SQL?
To save data inside Firestore I can execute the following inside a Cloud Function. Is there a similar way for BigQuery?
firestore.collection("put collection name here").doc("put document name here").set({
  'name': name,
  'age': age
}).then((writeResult) => {
  // console.log('Successfully executed set');
  return;
}).catch((err) => {
  console.log(err);
  return;
});
Is there a way to store the data directly inside BigQuery when it arrives in Pub/Sub?
Yes, you can use Dataflow to build a streaming pipeline, as explained in various documentation pages and blog posts:
GCP Doc: Pub/Sub Topic to BigQuery
A Dataflow Journey: from PubSub to BigQuery
Write a Pub/Sub Stream to BigQuery
But you could also use the Node.js Client for BigQuery in a Cloud Function, triggered by Pub/Sub. However, one could consider that this doesn't "store the data directly"...
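As a minimal sketch of that second option, assuming a Pub/Sub-triggered Cloud Function (Node.js) and a BigQuery dataset and table that already exist with a matching schema (the names below are placeholders, not values from your project):

// Hypothetical sketch: write each Pub/Sub message straight into BigQuery.
// Dataset and table names ("iot_data", "device_events") are assumptions.
const { BigQuery } = require("@google-cloud/bigquery");
const bigquery = new BigQuery();

exports.pubsubToBigQuery = async (message, context) => {
  // Pub/Sub delivers the payload base64-encoded in message.data.
  const payload = JSON.parse(Buffer.from(message.data, "base64").toString());

  // Streaming insert; the table schema must match the inserted fields.
  await bigquery
    .dataset("iot_data")
    .table("device_events")
    .insert([{
      name: payload.name,
      age: payload.age,
      received_at: new Date().toISOString(),
    }]);
};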
If so, can I also send an email with the data as an attachment?
If you use a Cloud Function, that's quite easy, for example by using the dedicated "Trigger Email" Firebase Extension.
You can also directly send an email from a Cloud Function by using the nodemailer package, see this official Cloud Function sample.
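As a rough sketch of the nodemailer approach (the SMTP host, credentials and addresses below are placeholders, not values from your setup), the data can simply be attached as a JSON file:

// Hypothetical sketch: send the stored record as a JSON attachment.
const nodemailer = require("nodemailer");

const transporter = nodemailer.createTransport({
  host: "smtp.example.com",       // placeholder SMTP server
  port: 465,
  secure: true,
  auth: { user: "sender@example.com", pass: "app-password" },
});

async function emailData(data) {
  await transporter.sendMail({
    from: "sender@example.com",
    to: "me@example.com",
    subject: "New IoT reading",
    text: "The latest reading is attached.",
    attachments: [
      { filename: "reading.json", content: JSON.stringify(data, null, 2) },
    ],
  });
}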
Is BigQuery a good solution or should I use "Cloud SQL"?
It all depends on your exact use case... There is a lot of literature on the net: https://www.google.com/search?client=firefox-b-d&q=difference+between+Cloud+SQL+and+BigQuery
However, since you are going to use Data Studio, a classical answer would be to use BigQuery, since it is best suited for analytics. But again, it depends on your exact use case.
(Note that this question alone would probably be closed on SO because it is opinion based).
To save data inside Firestore I can execute the following inside a Cloud Function. Is there a similar way for BigQuery?
Yes, as said above, use the Node.js Client for BigQuery in your Cloud Function.
This is my first post here; I'm new to Data Fusion and have little to no coding skills.
I want to get data from Zoho CRM into BigQuery, with each Zoho CRM module (e.g. accounts, contacts...) as a separate table in BigQuery.
To connect to Zoho CRM, I obtained a code, token, refresh token and everything else needed, as described here: https://www.zoho.com/crm/developer/docs/api/v2/get-records.html. Then I ran a successful get-records request as described here via Postman, and it returned the records from the Zoho CRM Accounts module as a JSON file.
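(For reference, the same get-records request outside Postman might look roughly like this in Node.js 18+; the data-center domain and the access token are assumptions, not values from my setup:)

// Hypothetical sketch: the get-records call made with Node.js (18+) fetch.
// The domain (www.zohoapis.com vs .eu/.in) and the token are placeholders.
const accessToken = "PASTE_ACCESS_TOKEN_HERE";

async function getAccounts() {
  const res = await fetch("https://www.zohoapis.com/crm/v2/Accounts?per_page=200", {
    headers: { Authorization: `Zoho-oauthtoken ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Zoho CRM request failed: ${res.status}`);
  const body = await res.json();
  return body.data; // array of Accounts records
}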
I thought it would all be fine, so I set the parameters in Data Fusion (DataFusion_settings_1 and DataFusion_settings_2) and it validated fine. Then I previewed and ran the pipeline without deploying it. It failed with the following info from the logs (logs_screenshot). I tried manually entering a few fields in the schema while the format was JSON, and I tried changing the format to CSV; neither worked. I tried switching Verify HTTPS Trust Certificates on and off. It did not help.
I'd be really thankful for some help. Thanks.
Update, 2020-12-03
I got in touch with a Google Cloud account manager, who then took my question to their engineers, and here is the info:
The HTTP plugin can be used to "fetch Atom or RSS feeds regularly, or to fetch the status of an external system"; it does not seem to be designed for APIs.
At the moment, a more suitable tool for data collected via APIs is Dataflow: https://cloud.google.com/dataflow
"Google Cloud Dataflow is used as the primary ETL mechanism, extracting the data from the API Endpoints specified by the customer, which is then transformed into the required format and pushed into BigQuery, Cloud Storage and Pub/Sub."
https://www.onixnet.com/insights/gcp-101-an-introduction-to-google-cloud-platform
So in the next few weeks I'll be looking at Dataflow.
Can you please attach the complete logs of the preview run? Make sure to redact any PII data. Also, what version of CDF are you using? Is the CDF instance private or public?
Thanks and Regards,
Sagar
Did you end up using Dataflow?
I am also experiencing the same issue with the HTTP plugin, but my temporary workaround was to use Cloud Scheduler to periodically trigger a Cloud Function that fetches my data from the API and exports it as JSON to GCS, which can then be accessed by Data Fusion.
My solution is of course non-ideal, so I am still looking for a way to use the Data Fusion HTTP plugin. I was able to make it work to get sample data from public API endpoints, but for a reason still unknown to me I can't get it to work for my actual API.
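For what it's worth, a minimal sketch of that workaround, assuming an HTTP-triggered Cloud Function (Node.js 18+) invoked by Cloud Scheduler; the bucket name, API URL and token are placeholders:

// Hypothetical sketch: pull records from an API and stage them as JSON in GCS
// so that Data Fusion (or BigQuery) can pick them up later.
const { Storage } = require("@google-cloud/storage");
const storage = new Storage();

exports.apiToGcs = async (req, res) => {
  const apiRes = await fetch("https://www.zohoapis.com/crm/v2/Accounts", {
    headers: { Authorization: `Zoho-oauthtoken ${process.env.ZOHO_TOKEN}` },
  });
  const body = await apiRes.json();

  const file = storage
    .bucket("my-staging-bucket")                      // placeholder bucket
    .file(`zoho/accounts-${Date.now()}.json`);
  await file.save(JSON.stringify(body.data), { contentType: "application/json" });

  res.status(200).send("exported");
};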
Is it possible to get stream-related information in the data load editor in a Qlik Sense app? I cannot create new apps. I can access other information like app ID, app path, etc.
The stream that an application is published to is, to the best of my knowledge, not available through a system function within the load script. You can certainly pull that information from the Repository API inline. The obvious candidate for doing so would be the monitor_apps_REST_app data connection, which should be present on your system. You should be able to pass DocumentName() as a variablized WHERE clause into the REST connection pull and then store the stream name or ID.
What is the ultimate use case?
Is there a way to get all activities from the Google fitness store via the REST API?
My current assumption is that other apps store their activities in sessions and I can retrieve them using Users.sessions.list. However, the information there does not really include all the information that was stored or that I would like to see: when I manually add a short run via the Fit Android app, I expect this information to be somehow accessible via the sessions API. It should at least include the information I have provided, such as distance or time.
Looking at the same information via the app or the web interface, I can see all the details I have previously entered plus the approximate number of steps and calories.
How do I get this information via the API?
I am currently mainly interested in activities of type running or jogging (8, 56-58) and would like to read the distance in addition to the time information already provided in the session.
Not sure if this is the right way, but I get all the information I need if I follow these steps:
Find the correct session via Users.sessions.list
Query all data via Users.datasets.aggregate:
Set startTimeMillis and endTimeMillis to the values from the session in question
Set bucketBySession to group the results by session.
I explicitly query all data sources: for every data source ID I add a { "dataSourceId": <id> } entry to the aggregateBy array. Not sure if this is necessary.
The resulting bucket has all information related to the session. For my use case I need to clean up overloaded data: some data sources return the distance as steps (derived) while I need the physical length in meters.
This seems to work for my Fit data with the additional cleaning, but I will need to check if this works for other users' data too.
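Sketched in code (Node.js 18+), the aggregate call from the steps above could look something like this; the OAuth access token and the merged-distance data source ID are placeholders rather than values from my setup:

// Hypothetical sketch: aggregate distance for one session via the Fitness REST API.
// The access token and the data source ID are placeholders.
async function aggregateSession(accessToken, session) {
  const res = await fetch("https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      aggregateBy: [
        { dataSourceId: "derived:com.google.distance.delta:com.google.android.gms:merge_distance_delta" },
      ],
      bucketBySession: {},                       // group results by session
      startTimeMillis: session.startTimeMillis,
      endTimeMillis: session.endTimeMillis,
    }),
  });
  const data = await res.json();
  return data.bucket;                            // bucket[].dataset[].point[] holds the values
}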
I have been using Flurry.com to capture my analytical data for my iPhone app. I send them custom event information about what is going on in my application (registration/login/etc). I pass extra information with these events. Now I want to access this information and analyze it. How do I do that?
On their website I can see small 'pages' of information captured from my app. I can even 'export to CSV' a small 'page' of this data. But I do not see a way to export all of the data for a given period of time. Am I missing something?
I found the api.flurry.com RESTful API today, but again it looks like I can only make two different calls (AppMetrics/AppInfo), which seem kind of useless and only return information for canned metrics. I really want to get at the custom events and custom event data that I sent to them. Is there a way to do this?
Thanks for any help.
There now appears to be an EventMetrics API call; it allows you to request information about your events.
I received the following response from Flurry:
I apologize for the inconvenience. We will eventually be expanding Flurry's API functionality to include events data. But until that occurs you should be able to access your event's data via Flurry's CSV files.
It looks like my data is stuck inside of Flurry.com right now. I think I'd better rethink my analytics strategy. I need my data out of Flurry.com and into my own data warehouse!
Update:
Flurry has now implemented its events data API. However, if you want to do custom analytics on the custom data that you send, you will probably be disappointed. The output of a call to the events data API is a summary, not your original logs.