We're experimenting with storing data in MongoDB using Node-RED. As it stands, we can store data in the database, but it seems that only msg.payload is stored (as a document), not the whole msg object, which confuses us a little.
The flow is very simple; nothing much has really been done to it.
We actually don't need ALL the data, but we want to store the payload together with its metadata as a single document in our collection. We've tried searching for an answer to this, but couldn't find anything relevant on how to do it. Hopefully we can get some help on this forum.
Thanks in advance! (By the way, we're using the mongodb3 node in Node-RED to store the data.)
The node you are using is working as intended.
The normal pattern in Node-RED is that any given node focuses on the msg.payload entry; any other msg properties are considered metadata.
The simplest thing here would be to use the built-in core Change node to move the other fields you are interested in into the msg.payload object.
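For illustration, the same move can be done in a Function node placed before the mongodb3 node; this is only a sketch, and msg.topic and msg._msgid stand in for whatever metadata properties your flow actually carries:

// Wrap the original payload and the metadata we want to keep into a
// single object, so the mongodb3 node stores all of it as one document.
msg.payload = {
    payload: msg.payload,
    topic: msg.topic,      // placeholder metadata field
    msgid: msg._msgid,     // placeholder metadata field
    storedAt: new Date().toISOString()
};
return msg;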
With the Copy Data activity it is possible to retrieve data from a REST call (an array of flat JSON objects, similar to OData) and copy the contents to a flat table, keeping the data types from the source but without having to set the schema for that specific data.
When I try to recreate this with a Data Flow, I can't get it to work. When I check the Data Preview of my source, I get a hierarchy with a body (containing my OData-like data) and a header, and if I send that to my sink (Avro) it is saved in the same hierarchical structure (including the header). I know I can fix this manually with a Select transformation (body.column1, body.column2, etc.), but I want to make my Data Flow dynamic so I can use it with multiple tables/endpoints.
So I receive it like this with my REST source:
link
And I want it to be like this at my Sink without hardcoding my schema:
link
The only workaround I can come up with is retrieving the data using Copy Data, putting it somewhere temporarily, and then using my Data Flow to transform it further. Is there an easier way to do this? I can't imagine that I'm the only one with this issue.
Hopefully it's clear and somebody is able to help. Thank you very much in advance.
The Data Flow projection gets its schema from the API, including both body and header. Hence, when you use auto-mapping, everything is going to be saved.
Below are the workarounds you can consider:
As you mentioned, use Copy Data first and then a Data Flow to transform the data further.
Use Select or Derived Column transformations to map your data to the column names you want before the sink. For this you can use column pattern matching syntax, so that one condition can match multiple columns to transform (see the sketch after the link below).
Check the link below to learn about column pattern mappings.
https://learn.microsoft.com/en-us/azure/data-factory/concepts-data-flow-column-pattern
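As a sketch of the second workaround: in a Select transformation, a rule-based mapping can match every column under the body hierarchy and promote it to the top level, with no column names hardcoded. In data flow script form it might look roughly like this (source1 and FlattenBody are made-up names; check the exact syntax against the docs above):

source1 select(mapColumn(
        each(body, match(true()))
    ),
    skipDuplicateMapInputs: true,
    skipDuplicateMapOutputs: true) ~> FlattenBody

The match(true()) condition matches every subcolumn of body, whatever it is called, which is what keeps the flow dynamic across endpoints.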
I'm pretty confused by this hip thing called NoSQL, especially Cloudant DB on Bluemix. As you know, this DB doesn't store values chronologically. It's the programmer's task to sort the entries in case they want the data to... well... be sorted.
What I'm trying to achieve is simply to get the last, let's say, 100 values a sensor has sent to Watson IoT (which saves everything in the connected Cloudant DB) in an ORDERED way. In the end it would be nice to show them in a D3.js-style graph, but that's another task. First I need the values in an ordered array.
What I've tried so far: I used curl via PHP to get the data from https://averylongID-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_all_docs?limit=20&include_docs=true
What I get is an unsorted array of 20 row entries with seemingly random timestamps: the last 20 entries added to the DB, but not the last 20 in terms of timestamps.
My question is now: do you know of a way to get the "last" 20 entries, sorted by timestamp? I did a POST request with a JSON string asking for the data sorted by timestamp, but that doesn't work, maybe because the timestamp is an ISO string.
Do I really have to write a JavaScript or PHP script that gets ALL the database entries, finds the last 20 or 100 by parsing the timestamps, sorts the array, and only then returns the (now really) last entries? I can't believe that.
Many thanks in advance!
I finally found out how to get the data in a nicely ordered way. The key is to use the _design API together with the _view API.
So a curl request with the following URL, attributes, and query string did the job:
https://alphanumerical_something-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_design/iotp/_view/by-date?limit=120&q=name:%27timestamp%27
The curl result gets me the first (in terms of time) 120 entries. I still have to find out how to get the last entries, but that's already a pretty good result. I can now pass the data on to a nice JS chart and display it.
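For reference, a by-date view like that presumably just emits each document's timestamp as the view key; a minimal map function along those lines (a sketch, not the actual iotp design document) would be:

function (doc) {
  // Emitting the timestamp as the key makes the view sorted by time.
  if (doc.timestamp) {
    emit(doc.timestamp, doc);
  }
}

And to read the last entries rather than the first, the same view can be queried in reverse by adding descending=true to the query string.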
One option may be to include the timestamp as part of the ID. The _all_docs query returns documents in order by id.
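For example, if the documents had been written with IDs like 2018-01-25T14:59:59Z-sensor42 (a hypothetical scheme), the newest 20 could be read in one call with:

https://averylongID-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_all_docs?limit=20&descending=true&include_docs=true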
If that approach does not work for you, you could look at creating a secondary index based on the timestamp field. One type of index is Cloudant Query:
https://console.bluemix.net/docs/services/Cloudant/api/cloudant_query.html#query
Cloudant Query allows you to specify a sort argument:
https://console.bluemix.net/docs/services/Cloudant/api/cloudant_query.html#sort-syntax
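For example, assuming the documents carry a timestamp field and a JSON-type index on that field has been created first, a POST to the database's _find endpoint with a body like this would return the newest 20 documents:

{
  "selector": { "timestamp": { "$gt": null } },
  "sort": [ { "timestamp": "desc" } ],
  "limit": 20
}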
Another approach that may be useful for you is the _changes API:
https://console.bluemix.net/docs/services/Cloudant/api/database.html#get-changes
The changes API allows you to receive a continuous feed of changes in your database. You could feed these changes into a D3 chart for example.
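For example (include_docs=true makes each change carry its document, ready to be pushed into the chart):

https://averylongID-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_changes?feed=continuous&include_docs=true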
I am replicating a collection (I only have access to the mongo shell on the server). In the current collection, all documents have a field called jsonURL whose value is a URL such as http://www.something.com/api/abc.json. I want to copy each document from oldCollection to newCollection, but I also want to fetch the data from that URL and add it to each new document created.
The last I heard, XMLHttpRequest was on Mongo's list, but as a low-priority feature (I can understand why), and since I found nothing in the documentation, I'm guessing it's still in the queue. I'm hoping I can do something inside forEach(function(eachDoc){});
Do I have any other way of achieving this? Thanks.
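For reference, the copy half of the loop being described is straightforward in the shell; it is only the fetch step that has no built-in equivalent, which the placeholder comment below marks (a sketch, not a working solution):

db.oldCollection.find().forEach(function (eachDoc) {
    // The mongo shell has no built-in HTTP client, so there is nothing
    // standard to put here for fetching eachDoc.jsonURL; that request
    // would have to happen outside the shell.
    // eachDoc.fetchedData = <result of GET eachDoc.jsonURL>;
    db.newCollection.insert(eachDoc);
});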
I have an instance of RavenDB at localhost:8081. I made sure to change Raven's config file to allow anonymous access. I created a database named AT. Inside AT I have a collection named Admins, and inside Admins I have two documents. I'm trying to retrieve some data via REST using RestClient. I try to hit the DB using:
http://localhost:8081/docs/admins/7cb95e9a (last bit is the id of the document I want).
and
http://localhost:8081/docs/at/admins/7cb95e9a.
With both I receive a 404. I'm not sure what I'm missing here. Can someone point me in the right direction?
The URL has the following format:
http://localhost:8081/databases/{{database-name}}/docs/{{document-id}}.
A collection is a virtual thing: you get a document only by its ID, and the collection plays no part in the URL. The document ID can be anything you set, but if you let RavenDB generate it, it will probably be something like admins/1.
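So, assuming the database is named AT and 7cb95e9a really is the full document ID, the request would be:

http://localhost:8081/databases/AT/docs/7cb95e9a

(or http://localhost:8081/databases/AT/docs/admins/1 for a RavenDB-generated ID).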
I'm looking for a recommendation on how best to implement MongoDB foreign-key ObjectId fields. There seem to be two possible options: either with a nested _id field or without.
Take a look at the fkUid field below.
{'_id':ObjectId('4ee12488f047051590000000'), 'fkUid':{'_id':ObjectId('4ee12488f047051590000001')} }
OR
{'_id':ObjectId('4ee12488f047051590000000'), 'fkUid':ObjectId('4ee12488f047051590000001')}
Any recommendations would be much appreciated.
I'm having a hard time coming up with any possible advantages for putting an extra field "layer" in there, so I would personally just store the ObjectId directly in fkUid.
I suggest using the default DBRef implementation, which is described here: http://www.mongodb.org/display/DOCS/Database+References and is compatible with most language-specific drivers.
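For illustration, a DBRef stores the referenced collection name alongside the ID; assuming the referenced documents live in a users collection (a made-up name here), the field would look like:

{'_id':ObjectId('4ee12488f047051590000000'), 'fkUid':{'$ref':'users', '$id':ObjectId('4ee12488f047051590000001')}}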
If your question is about the naming of the field (what you have in the title), usually the convention is to name it after the object to which it refers.
Both of the ways you mention store the same reference, but they suit different kinds of usage.
Storing fkUid as an object, like 'fkUid':{'_id':ObjectId('4ee12488f047051590000001')}, has its own pros. For example, suppose there is a website where users can post images and view images posted by other users, and when showing an image the website also shows the name/username of the poster. With this form you can store such details inline, like 'fkUid':{'_id':ObjectId('4ee12488f047051590000001'), username: 'SOME_X'}, so when you read the document from the DB you don't have to send another request to look up the username for that _id.
Whereas with the second form, 'fkUid':ObjectId('4ee12488f047051590000001'), you have to send another request to the server just to get the name/username, since nothing else useful is stored in the field.
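To make the trade-off concrete, here is what displaying the username looks like in the mongo shell under each layout (images and users are hypothetical collection names):

// Plain ObjectId reference: two round trips to show the username.
var img = db.images.findOne({_id: ObjectId('4ee12488f047051590000000')});
var user = db.users.findOne({_id: img.fkUid});
print(user.username);

// Embedded copy of the username: one round trip, but the copy has to
// be kept in sync if the user ever changes their name.
var img2 = db.images.findOne({_id: ObjectId('4ee12488f047051590000000')});
print(img2.fkUid.username);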