Unable to retrieve date field from mongodb

The date field, when saved to MongoDB, is in the following format:
{ "_id" : ObjectId("4f03283e1d4ee82215000002"), "name" : "nano3",
"category_id" : ObjectId("4f022b411d4ee8105700001c"), "price" : 20,
"production_date(3i)" : "1", "production_date(2i)" : "1",
"production_date(1i)" : "2011", "description" : "a music player with
video play function" }
When I try to get the date using #product.production_date from my model, it fails. I am using the Mongoid mapper.

It fails because you don't have any field named "production_date".
What you do have are fields named "production_date(3i)", "production_date(2i)", and "production_date(1i)" - the separate string fields that Rails multiparameter date attributes produce.
You should be saving instances of the Time class, which can be properly serialized by the Ruby driver.
Time.now or Time.utc(2011, 1, 1) will probably do what you want.
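For comparison, here is a minimal PyMongo sketch of the same principle (the database, collection, and field names are made up for illustration): store a native datetime and the driver serializes it as a single BSON date you can read back directly.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
products = client.mydb.products  # hypothetical database and collection

# Store a native datetime; the driver serializes it as one BSON date field.
products.insert_one({
    "name": "nano3",
    "price": 20,
    "production_date": datetime(2011, 1, 1, tzinfo=timezone.utc),
})

# The field comes back as a single datetime object, not three string fields.
doc = products.find_one({"name": "nano3"})
print(doc["production_date"])  # 2011-01-01 00:00:00 (naive UTC by default)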

Related

How can I convert geopoint data of mongoDB to Elasticsearch 7.x in this case?

I want to convert geo data from MongoDB to Elasticsearch 7.x in real time.
I just know that I should use the logstash-input-mongodb plugin for Logstash.
Please let me know how I can write logstash.conf; refer to the data below.
I have MongoDB data like this
(some fields were encrypted using GibberishAES.size(256) with a custom string key):
{
"id" : "john",
"age" : 26,
"geo" : "Cdzv5OoMXFw89do5NUorGkiRzAtnIpIw66kg=", // "57.233, 129.11"
"address" : "I6LoxOQPRPF7h4SLQo2g=" // "Rovert Hall"
}
The Elasticsearch data should look like this:
{
"id" : "john",
"age" : 26,
"geo" : "drm3bt", // geohash data of "57.233, 129.11"
"address" : "Rovert Hall"
}
I solved it by building a MongoDB oplog sender.
It reads the MongoDB oplog containing the encrypted data and then sends the decrypted records to Logstash.
If someone is looking for a solution, you can find it here: https://github.com/gnokoheat/oplog
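As a rough illustration of the approach (a hedged sketch, not the linked project's actual code; the decrypt and forward steps are hypothetical hooks), the oplog can be tailed from Python with PyMongo like this:
import time
import pymongo

# Tailing the oplog requires MongoDB to be running as a replica set.
client = pymongo.MongoClient("mongodb://localhost:27017")
oplog = client.local.oplog.rs

# Start from the most recent oplog entry.
last = oplog.find().sort("$natural", pymongo.DESCENDING).limit(1).next()
ts = last["ts"]

while True:
    cursor = oplog.find(
        {"ts": {"$gt": ts}},
        cursor_type=pymongo.CursorType.TAILABLE_AWAIT,
        oplog_replay=True,
    )
    for entry in cursor:
        ts = entry["ts"]
        # Here you would decrypt entry["o"] and forward it to Logstash.
        print(entry["op"], entry.get("ns"))
    time.sleep(1)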

Adding to a double-nested array in MongoDB

I have a double-nested array in my MongoDB schema and I'm trying to add an entirely new element to a second-level nested array using $push. I'm getting the error cannot use the part (...) to traverse the element.
The documents have the following structure:
{
  "_id" : ObjectId("5d8e37eb46c064790a28a467"),
  "org-name" : "Manchester University NHS Foundation Trust",
  "domain" : "mft.nhs.uk",
  "subdomains" : [
    {
      "name" : "careers.mft.nhs.uk",
      "firstSeen" : "2017-10-06 11:32:00",
      "history" : [
        {
          "a_rr" : "80.244.185.184",
          "timestamp" : ISODate("2019-09-27T17:24:57.148Z"),
          "asn" : 61323,
          "asn_org" : "Ukfast.net Limited",
          "city" : null,
          "country" : "United Kingdom",
          "shodan" : {
            "ports" : [
              {
                "port" : 443,
                "versions" : [
                  "TLSv1",
                  "-SSLv2",
                  "-SSLv3",
                  "TLSv1.1",
                  "TLSv1.2",
                  "-TLSv1.3"
                ],
                "cpe" : "cpe:/a:apache:http_server:2.4.18",
                "product" : "Apache httpd"
              }
            ],
            "timestamp" : ISODate("2019-09-27T17:24:58.538Z")
          }
        }
      ]
    }
  ]
}
What I'm attempting to do is refresh the details held in the history array by adding another entire entry representing the most recently collected data for the subdomain name.
The net result is that I will have multiple entries in the history array, each one timestamped with the date that the data was refreshed. That way I have a historical record of changes to any of the data held.
I've read that I can't use $push on a double-nested array, but the other advice about using arrayFilters all appears to relate to updating an existing entry in an array rather than simply appending an entirely new document - unless I'm missing something!
I'm using PyMongo and would like to build a new dictionary containing all of the data elements and simply append it to the history array.
Thanks!
Straightforward in PyMongo:
# Read the document, append to the nested list in Python, then write it back.
record = db.mycollection.find_one()  # add a filter here to target the right document
record['subdomains'][0]['history'].append({'another': 'record'})
db.mycollection.replace_one({'_id': record['_id']}, record)
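If you'd rather avoid the read-modify-write round trip, a single atomic update also works, because $push combined with the positional $ operator can target the matched subdomain (a hedged sketch; the filter values come from the example document above):
from bson import ObjectId

# Atomically append a new history entry to the subdomain matched in the filter.
db.mycollection.update_one(
    {"_id": ObjectId("5d8e37eb46c064790a28a467"),
     "subdomains.name": "careers.mft.nhs.uk"},
    {"$push": {"subdomains.$.history": {"another": "record"}}},
)
The positional $ resolves to the index of the first subdomains element matched by the query filter, so the new entry lands in that element's history array.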

mongodb time duration between two dates

I want to find the time duration in the form of days:hours:minutes by passing two dates in MongoDB, for all documents.
Example document present in MongoDB:
{
"_id" : ObjectId("5b3b303f4a05d1673d9bfa31"),
"customerID" : "1",
"latitude" : "13.035770",
"longitude" : "77.597022",
"loc_update_time" : 1533021761818.0,
"loc" : [
77.597022,
13.03577
],
"datetime" : ISODate("2018-08-01T09:43:25.729Z")
}
One date is present in the document, and the other date I will pass in the query.
Please help me.
Thank you.
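One possible approach, as a hedged sketch (the database and collection names and the passed-in date are assumptions): $subtract on two BSON dates yields a difference in milliseconds, which $divide, $mod, and $floor can break into days, hours, and minutes.
from datetime import datetime
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").mydb  # hypothetical database
passed_date = datetime(2018, 8, 10)                 # the date passed in the query

pipeline = [
    # Subtracting two dates produces the difference in milliseconds.
    {"$project": {
        "customerID": 1,
        "millis": {"$subtract": [passed_date, "$datetime"]},
    }},
    # Break the milliseconds into days : hours : minutes.
    {"$project": {
        "customerID": 1,
        "days": {"$floor": {"$divide": ["$millis", 86400000]}},
        "hours": {"$floor": {"$mod": [{"$divide": ["$millis", 3600000]}, 24]}},
        "minutes": {"$floor": {"$mod": [{"$divide": ["$millis", 60000]}, 60]}},
    }},
]
for doc in db.mycollection.aggregate(pipeline):
    print(doc)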

How to read from List and Display in Jaspersoft iReport Designer 5.1 using MongoDB as a field

I have a MongoDB collection 'quotes' with a lineItems list, as shown below...
db.quotes.find();
> { "_id" : "51d31c4a0364a1b7f7cf45f7",
"accountName" : "NewAccountName",
"className" : "com.db.model.Quote",
"cost" : "0",
"lineItems" : [ { "lineNo" : 1, "product" : { "sku" : "MW216", "description" : "JBoss EAP", "cost" : "1043.5" }, "quotePrice" : "1230", "quantity" : 4 },
{ "lineNo" : 2, "product" : { "sku" : "MW217", "description" : "JBoss EDS, "cost" : "15178.18"}, "quotePrice" : "0", "quantity" : 3} ],
"quoteNumber" : "22005",
"shipping" : "GROUND"
}
I am using the MongoDB query shown below:
{
  'collectionName' : 'quotes',
  'findQuery' : {
    $where : "this.quoteNumber == $P!{QuoteNo}"
  }
}
I would like to display each lineItem as one row:
lineNo | sku | Description
1 | MW216 | JBoss EAP
2 | MW217 | JBoss EDS
How do I design this with JasperReports using iReport Designer?
Currently, taking the 'lineItems' field from the Report Inspector and placing it in the report shows everything in the list as one object. I am trying to read each field in the list and display it in the report, grouped by lineItems as shown above.
Any help or clues will be appreciated. Thanks for your time in helping me out.
First, you should absolutely not be using $where. Instead of $where : "this.quoteNumber == $P!{QuoteNo}", you should simply use db.quotes.find({quoteNumber: YOURQUOTENO}). This executes a normal MongoDB query on the server rather than spawning a JavaScript shell to evaluate the $where statement.
If you want to see each line item separately, use the aggregation framework's $unwind stage, like this:
db.quotes.aggregate({$unwind: "$lineItems"})
This returns one document per lineItem, so if you have five documents, each with three lineItems in the array, you would get back 15 documents.
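Putting both suggestions together, here is a hedged PyMongo sketch (the database name and quote number are assumptions) that filters to one quote and flattens the line items into exactly the rows wanted above:
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").mydb  # hypothetical database

pipeline = [
    {"$match": {"quoteNumber": "22005"}},  # normal server-side filter, no $where
    {"$unwind": "$lineItems"},             # one output document per line item
    {"$project": {
        "lineNo": "$lineItems.lineNo",
        "sku": "$lineItems.product.sku",
        "description": "$lineItems.product.description",
    }},
]
for row in db.quotes.aggregate(pipeline):
    print(row["lineNo"], "|", row["sku"], "|", row["description"])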

MongoDB : query result size greater than collection size

I'm analyzing a MongoDB data source to check its quality.
I'm wondering whether every document contains the attribute time, so I used these two commands:
> db.droppay.find().count();
291822
> db.droppay.find({time: {$exists : true}}).count()
293525
How can I have more elements with a given field than there are elements in the whole collection? What's going wrong? I'm unable to find the mistake.
If it's necessary, I can post the expected structure of the document.
The Mongo shell version is 1.8.3. The MongoDB version is 1.8.3.
Thanks in advance.
This is the expected structure of the document entry:
{
"_id" : ObjectId("4e6729cc96babe974c710611"),
"action" : "send",
"event" : "sent",
"job_id" : "50a1b7ac-7482-4ad6-ba7d-853249d6a123",
"result_code" : "0",
"sender" : "",
"service" : "webcontents",
"service_name" : "webcontents",
"tariff" : "0",
"time" : "2011-09-07 10:22:35",
"timestamp" : "1315383755",
"trace_id" : "372",
"ts" : "2011-09-07 09:28:42"
}
My guess is that it is an issue with the index. I bet that droppay has an index on time, and some unsafe operation updated the underlying collection without updating the index.
Can you try repairing the database and see if that makes it better?
Good luck.
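As a hedged diagnostic sketch in PyMongo (the database name is assumed; on a 1.8-era deployment the shell equivalents are db.droppay.validate() and db.repairDatabase()), you can validate the collection before resorting to a repair:
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").mydb  # hypothetical database name

# Run the server-side validate command; full=True also checks index integrity.
result = db.command("validate", "droppay", full=True)
print(result.get("valid"))

# Compare the overall count with the count of documents that have the field.
print(db.droppay.count_documents({}))
print(db.droppay.count_documents({"time": {"$exists": True}}))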
There are probably time values that are of type array. An index on an array field is multikey - it stores one entry per array element - so an index-backed count can exceed the number of documents in the collection.
You may run db.droppay.find({time: {$type : 4}}) to find such documents ($type 4 is the BSON array type).