MongoDB update if newer or insert if not exists

For example, suppose I have some sensors, each with a unique serial number (SN).
I also have some history data (not sorted) in the format (SN, timestamp, value).
I want to maintain each sensor's latest status in MongoDB as {"sn": xxx, "timestamp": xxx, "value": xxx, "installed_time": xxx}. The installed_time may be added manually later.
So currently, my code is like:

if db.sensors.find_one({"sn": SN, "timestamp": {"$lt": timestamp}}):
    db.sensors.update_one({"sn": SN}, {"$set": {"timestamp": timestamp, "value": value}}, upsert=True)
I'd like to know whether I can combine these into one operation.
I tried a conditional upsert:

db.sensors.update_one({"sn": SN, "timestamp": {"$lt": timestamp}}, {"$set": {"timestamp": timestamp, "value": value}}, upsert=True)

The problem is that I end up with multiple documents with the same SN.
For example, let's start with an empty collection. First, (1, 3, 1) is processed and {"sn": 1, "timestamp": 3, "value": 1} is inserted. Then, when (1, 1, 2) is processed, the upsert creates another document {"sn": 1, "timestamp": 1, "value": 2}. The intended behaviour is to simply ignore this data point.
I also tried a document replacement:

db.sensors.replace_one({"sn": SN, "timestamp": {"$lt": timestamp}}, {"sn": SN, "timestamp": timestamp, "value": value}, upsert=True)

This overwrites other fields such as installed_time.
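One way to combine this into a single operation, assuming MongoDB 4.2 or newer, is an update whose second argument is an aggregation pipeline, so the new values are only taken when the incoming timestamp is newer. A minimal sketch in mongo shell syntax (ts and val stand for the incoming reading; with PyMongo the same pipeline can be passed as a list to update_one):

db.sensors.updateOne(
  {"sn": SN},  // match on SN only, so there is at most one document per sensor
  [
    {"$set": {
      // On a fresh upsert "$timestamp" is missing (treated like null), which compares
      // lower than any real timestamp, so the incoming values are stored.
      // On an existing document the incoming values win only when ts is newer.
      "timestamp": {"$cond": [{"$lt": ["$timestamp", ts]}, ts, "$timestamp"]},
      "value": {"$cond": [{"$lt": ["$timestamp", ts]}, val, "$value"]}
    }}
  ],
  {"upsert": true}
)

A unique index on sn is also worth adding, so that two racing upserts cannot create duplicate documents for the same sensor.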

Related

How to model map of maps in mongoose to create a grouped-by-category items collection in MongoDB

In my Angular app, I store items in groups (categories): pendants, earrings, sets, etc. (instead of a flat items array).
I would like the database structure to be grouped by category as well.
I use the following TypeScript data structure in Angular:

items: Map<Category, Map<string, Item>>

So far I have been able to create a JSON representation of it and load it into the Google Realtime Database, and this has worked fine.
But now I would like to move to MongoDB, which uses Mongoose schemas.
I'm not sure how to model this map of maps in Mongoose.
I understand I need to create a Schema for a single item.
But what about the items map? Do I need an Items Schema as well?
My collection in MongoDB is a collection of Item. But this is not a flat collection; it should be a "by-category" collection. How does MongoDB represent this structure, and what is the appropriate Mongoose schema solution?
I've uploaded the items JSON to MongoDB via mongoimport in the shell. I think MongoDB now sees my structure as a collection containing a single huge, nested document. This is not exactly what I want: I want to keep the nested, by-category structure, but with each item recognized as a document in the same collection, and I want to be able to query that structure accordingly, e.g. pull from the DB just the items in the pendants category.
Also, should I go with TypeScript or JavaScript for Mongoose?
I looked into the TypeScript documentation for Mongoose and suspect the support is not mature yet. I expected Mongoose schemas to be derived from my TypeScript models, but schemas still end up as hard-coded copies of the models, which seems odd. Is there a robust TypeScript solution that reuses the frontend models and minimizes code duplication?
This is what the JSON representation of my items looks like:
{
  "pendants": {
    "01001A": {
      "catId": "01001A",
      "name": "moon",
      "category": "pendants",
      "price": 90,
      "inStockQuantity": 1,
      "isOnSale": false,
      "rating": 5,
      "importance": 1,
      "size": {
        "diameter": 3.5,
        "lanyardLength": 43
      }
    },
    "01002A": {
      "catId": "01002A",
      "name": "fish",
      "category": "pendants",
      "price": 90,
      "inStockQuantity": 1,
      "isOnSale": false,
      "rating": 5,
      "importance": 1,
      "size": {
        "diameter": 2,
        "lanyardLength": 43
      }
    }
  },
  "earrings": {
    "02001A": {
      "catId": "02001A",
      "name": "winter",
      "category": "earrings",
      "price": 90,
      "inStockQuantity": 1,
      "isOnSale": false,
      "rating": 5,
      "importance": 1,
      "size": {
        "diameter": 1.3,
        "chainLength": 43
      }
    },
    "02002A": {
      "catId": "02002A",
      "name": "flower",
      "category": "earrings",
      "price": 160,
      "inStockQuantity": 1,
      "isOnSale": false,
      "rating": 5,
      "importance": 1,
      "size": {
        "height": 2.6,
        "vavLength": 2
      }
    }
  }
}
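One common way to model this in MongoDB is a flat items collection where each item is its own document and the category is just an indexed field, so "all pendants" becomes a simple query instead of a nested lookup. A hedged Mongoose sketch along those lines (the model name, connection string and the Mixed type for size are assumptions, not the only possible choices):

const mongoose = require('mongoose');

// One document per item; category is a plain indexed field instead of a nesting level.
const itemSchema = new mongoose.Schema({
  catId: { type: String, required: true, unique: true },
  name: String,
  category: { type: String, index: true },
  price: Number,
  inStockQuantity: Number,
  isOnSale: Boolean,
  rating: Number,
  importance: Number,
  size: mongoose.Schema.Types.Mixed // shape differs per category (diameter, chainLength, ...)
});

const Item = mongoose.model('Item', itemSchema);

async function loadPendants() {
  await mongoose.connect('mongodb://localhost:27017/shop'); // assumed connection string
  return Item.find({ category: 'pendants' });               // pull just the pendants category
}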

MongoDB upsert array document with golang

I have a document like the one below:

{
  "_id": "1.0",
  "files": [
    {"name": "file_1", "size": 1024, "create_ts": 1570862776426},
    {"name": "file_2", "size": 2048, "create_ts": 1570862778426}
  ]
}
And I want to upsert "files" with "file_x":
1. If "file_x" is already in "files", update it. For example, if "file_x" is

{"name": "file_2", "size": 4096, "create_ts": 1570862779426}

then after the upsert the document is:

{
  "_id": "1.0",
  "files": [
    {"name": "file_1", "size": 1024, "create_ts": 1570862776426},
    {"name": "file_2", "size": 4096, "create_ts": 1570862779426}
  ]
}
2 if "file_x" not in "files", insert it, for example "file_x" is:
{"name": "file_3", "size": 4096, "create_ts": 1570862779426}
after upsert document is :
{
"_id": "1.0",
files: [
{"name": "file_1", "size": 1024, "create_ts": 1570862776426},
{"name": "file_2", "size": 2048, "create_ts": 1570862778426},
{"name": "file_3", "size": 4096, "create_ts": 1570862779426}
]
}
So, can I achieve this with one operation?
You will need to do this manually. There's no upsert mechanism for embedded structures inside a document.
First fetch the document and check whether file_x is already in the files list: if it is, update that entry; if not, append it. Then save the document back.
You should make sure that at any given time, only one program / goroutine is trying to do this, otherwise you will run into race conditions and file_x might get inserted multiple times.
There is not a single update operation in MongoDB's update language that will do what you want. You can get close with $addToSet, which adds an item to an array only if it is not already there, but it will not update an existing item based on a match of a subset of its fields. Your best option is to perform a read, update in memory, and write back.
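To make the shape of that read, update-in-memory, write-back concrete, here is a minimal sketch in mongo shell JavaScript (mycoll and fileX are placeholders; the same flow applies with the Go driver):

// fileX is the incoming entry, e.g. {name: "file_2", size: 4096, create_ts: 1570862779426}
var doc = db.mycoll.findOne({ _id: "1.0" });
var idx = doc.files.findIndex(function (f) { return f.name === fileX.name; });
if (idx >= 0) {
  doc.files[idx] = fileX; // already present: replace the existing entry
} else {
  doc.files.push(fileX);  // not present yet: append it
}
// Write the whole document back; as noted above, concurrent writers must be serialized.
db.mycoll.replaceOne({ _id: "1.0" }, doc);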

How to get documents from MongoDB based on greater or less than the given date

I need to get records from MongoDB based on a date. My collection is below.
f_task:
{
  "_id": "5a13731f9402cc17f81ade10",
  "taskname": "task1",
  "description": "description",
  "timestamp": "2017-11-21 05:58:14",
  "created_by": "subhra",
  "taskid": "858fca9e2e153a61515c0372e079c521",
  "created_date": "21-11-2017"
}
Here I need to fetch records by created_date. Suppose the user input is 20-11-2017 or 22-11-2017; I need a query that returns the record if the given date is greater than or less than the "created_date" value.
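Note that created_date is stored as a "DD-MM-YYYY" string, and string comparison does not follow chronological order, so $gt/$lt on that format will not behave as expected. A sketch of what becomes possible once the field is stored as a real date, or as a sortable "YYYY-MM-DD" string (the boundary values below are just the example inputs from the question):

// created_date stored as a Date (ISODate): range queries work directly
db.f_task.find({ created_date: { $gt: ISODate("2017-11-20T00:00:00Z") } })
db.f_task.find({ created_date: { $lt: ISODate("2017-11-22T00:00:00Z") } })

// created_date kept as a string but in "YYYY-MM-DD" form, where lexicographic
// order matches chronological order:
db.f_task.find({ created_date: { $gt: "2017-11-20" } })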

MongoDB query - aggregation, group, filter, max

I am trying to figure out a specific MongoDB query, so far unsuccessfully.
Documents in my collection look something like this (they contain more attributes, which are irrelevant for this query):
[{
  "_id": ObjectId("596e01b6f4f7cf137cb3d096"),
  "code": "A",
  "name": "name1",
  "sys": {
    "cts": ISODate("2017-07-18T12:40:22.772Z")
  }
},
{
  "_id": ObjectId("596e01b6f4f7cf137cb3d097"),
  "code": "A",
  "name": "name2",
  "sys": {
    "cts": ISODate("2017-07-19T12:40:22.772Z")
  }
},
{
  "_id": ObjectId("596e01b6f4f7cf137cb3d098"),
  "code": "B",
  "name": "name3",
  "sys": {
    "cts": ISODate("2017-07-16T12:40:22.772Z")
  }
},
{
  "_id": ObjectId("596e01b6f4f7cf137cb3d099"),
  "code": "B",
  "name": "name3",
  "sys": {
    "cts": ISODate("2017-07-10T12:40:22.772Z")
  }
}]
What I need is to get the current versions of the documents, filtered by code or name, or both. "Current version" means that out of two (or more) documents with the same code, I want to pick the one with the latest sys.cts date value.
So, the result of this query executed with the filter name="name3" would be the 3rd document from the list above. The result of the query without any filter would be the 2nd and 3rd documents.
I have an idea of how to construct this query with a changed data model, but I was hoping someone could point me the right way without having to do so.
Thank you
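This looks like the standard "latest document per group" aggregation pattern; a hedged sketch, keeping the current data model (the collection name is a placeholder, and $replaceRoot requires MongoDB 3.4+):

db.mycoll.aggregate([
  { $match: { name: "name3" } },                           // optional: filter by code and/or name
  { $sort: { "sys.cts": -1 } },                            // newest first
  { $group: { _id: "$code", doc: { $first: "$$ROOT" } } }, // keep the latest document per code
  { $replaceRoot: { newRoot: "$doc" } }                    // unwrap back to the original shape
])

With the $match stage removed, this should return the 2nd and 3rd documents from the sample above, one per code.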

CouchDB: query reduced value on complex key with timeframe

Application users can perform different tasks. Each kind of task has a unique identifier. Every user activity is recorded to the database.
So we have the following Event entity to keep in the database:
{
  "user_id": 1,
  "task_id": 2,
  "event_dt": [2013, 11, 15, 10, 0, 0, 0]
}
I need to know how many tasks of each type were performed by a particular user during a particular timeframe. The timeframe might be quite long (e.g. a rolling chart for the last year is requested).
For better understanding, the map function might be something like:

emit([doc.user_id, doc.task_id, doc.event_dt], 1)

and it might be queried using group_level=2 (or group_level=1 in case just the number of user events is needed).
Is it possible to answer the above question with a single view query using the map/reduce mechanism? Or do I have to use the list functionality (though that may cause performance issues)?
Just use a flat key, [doc.user_id, doc.task_id].concat(doc.event_dt), since it simplifies the request and grouping logic (a sketch of the corresponding design document follows the example result below):
with group_level=1: you get the number of tasks per user for all time
with group_level=2: the number of each specific task id per user for all time
with group_level=3: same as above, but in the context of a specific year
with group_level=4: same as above, but also grouped by month
etc., down to days, hours, minutes and seconds
For instance, the result for group_level=3 may be:
{"rows":[
{"key": ["user1", "task1", 2012], "value": 3},
{"key": ["user1", "task2", 2013], "value": 14},
{"key": ["user1", "task3", 2013], "value": 15},
{"key": ["user2", "task1", 2012], "value": 9},
{"key": ["user2", "task4", 2012], "value": 26},
{"key": ["user2", "task4", 2013], "value": 53},
{"key": ["user3", "task1", 2013], "value": 5}
]}
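A hedged sketch of the corresponding design document (the design document and view names are made up; the built-in _sum reduce is enough because every emitted value is 1):

{
  "_id": "_design/stats",
  "views": {
    "by_user_task_time": {
      "map": "function (doc) { emit([doc.user_id, doc.task_id].concat(doc.event_dt), 1); }",
      "reduce": "_sum"
    }
  }
}

Events for one user and task in 2013, grouped by month, could then be requested with group_level=4 and a key range such as startkey=["user1","task1",2013] and endkey=["user1","task1",2013,{}].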