Azure Cosmos DB SQL API: last measure for each element - nosql

I have a Cosmos DB container "mesure" like this:
{
  "cdreseau": "035000544",
  "date": "2020-12-09",
  "element": "PH",
  "val": 7.1
}
{
  "cdreseau": "035000544",
  "date": "2020-12-14",
  "element": "CA",
  "val": 20.1
}...
I would like to find the last measure value and date for each element in a "cdreseau".
I can get the last date for each element with this:
SELECT MAX(c.date) AS date, c.element FROM c WHERE c.cdreseau = '040000422' GROUP BY c.element
But how can I get the c.val of the item in the same query? /date, /cdreseau, /element is a unique key.
Regards

This can't be achieved with SQL in Cosmos DB. You can create a new container that stores the latest value of each element, or you can implement a materialized view using the change feed.
Ref: How do I get the latest record for each item in CosmosDB using SQL
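To illustrate the materialized-view idea, here is a minimal sketch of an Azure Functions change feed trigger in JavaScript. The output binding name latestDocs, its target container, and the id scheme are assumptions for illustration (the function.json binding configuration is omitted); the Cosmos DB output binding upserts, so a deterministic id built from cdreseau and element keeps exactly one "latest" document per pair.
// index.js - runs for every batch of changes on the "mesure" container
module.exports = async function (context, documents) {
  if (!documents || documents.length === 0) {
    return;
  }
  // One document per (cdreseau, element); the deterministic id makes the
  // output binding overwrite the previous "latest" value on each change.
  context.bindings.latestDocs = documents.map(function (doc) {
    return {
      id: doc.cdreseau + "_" + doc.element,
      cdreseau: doc.cdreseau,
      element: doc.element,
      date: doc.date,
      val: doc.val
    };
  });
  // The change feed is ordered per logical partition, so the last write
  // wins; if measures can arrive out of date order, compare dates here
  // before overwriting.
};
The per-element query then becomes a plain scan of the small view container:
SELECT c.element, c.date, c.val FROM c WHERE c.cdreseau = '040000422'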

Related

Upsert time series in MongoDB v5 - v6

I'm reading the documentation about time series collections in MongoDB v5 and v6, and I don't understand whether it's possible to upsert a record after it has been saved. For example, say I have a record like this (the "name" field is the metaField):
{
  _id: ObjectId("6560a0ef02a1877734a9df66"),
  timestamp: ISODate("2022-11-24T01:00:00.000Z"),
  name: 'sensor1',
  pressure: 5,
  temperature: 25
}
Is it possible to update the value of the "pressure" field after the record has been saved?
From the official mongo documentation, inside the "Time Series Collection Limitations" section, I read that: The update command may only modify the metaField field value.
Is there a way to upsert other fields as well? Thanks a lot.
No, updating the pressure field in your example is impossible with update alone, and upsert doesn't exist for time series collections.
The only functions currently available for time series collections are Delete and Update, but they only work on the metaField values, so in your example, we can only update/rename 'sensor1'.
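For example (a minimal sketch, assuming the question's collection is named sensors and "name" is its metaField), the first statement below is accepted, while the second is rejected by the server:
// Allowed: the update touches only the metaField ("name").
db.sensors.updateMany(
  { "name": "sensor1" },
  { "$set": { "name": "sensor1-renamed" } }
)
// Not allowed: "pressure" is a measurement field, so the server rejects this:
// db.sensors.updateMany({ "name": "sensor1" }, { "$set": { "pressure": 6 } })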
The only workaround I know to update values is as follows:
1. Get a copy of all documents matched on the metaField values.
2. Update the desired values on the copied documents.
3. Delete the original documents from the database.
4. Insert your new copy of the documents into the database.
Here's a way to update values on a time series collection, using the MongoDB Shell (mongosh).
First, we create a test collection. The important part here is the metaField named "metadata". This field will be an object/dictionary that stores multiple fields.
db.createCollection(
  "test_coll",
  {
    timeseries: {
      timeField: "timestamp",
      metaField: "metadata",
      granularity: "hours"
    }
  }
)
Then we add some test data to the collection. Note that 'metadata' is an object/dictionary that stores two fields named sensorName and sensorLocation.
db.test_coll.insertMany( [
  {
    "metadata": { "sensorName": "sensor1", "sensorLocation": "outside" },
    "timestamp": ISODate("2022-11-24T01:00:00.000Z"),
    "pressure": 5,
    "temperature": 32
  },
  {
    "metadata": { "sensorName": "sensor1", "sensorLocation": "outside" },
    "timestamp": ISODate("2022-11-24T02:00:00.000Z"),
    "pressure": 6,
    "temperature": 35
  },
  {
    "metadata": { "sensorName": "sensor2", "sensorLocation": "inside" },
    "timestamp": ISODate("2022-11-24T01:00:00.000Z"),
    "pressure": 7,
    "temperature": 72
  }
] )
In your example we want to update the 'pressure' field, which currently holds the pressure value of 5. So we need to find all documents where the metaField 'metadata.sensorName' has a value of 'sensor1' and store all the found documents in a variable called old_docs.
var old_docs = db.test_coll.find({ "metadata.sensorName": "sensor1" })
Next, we loop through the documents (old_docs), updating them as needed, and add the documents (updated or not) to a variable named updated_docs. In this example, we loop through all 'sensor1' documents, and if the timestamp equals '2022-11-24T01:00:00.000Z' we update the 'pressure' field with the value 555 (it was initially 5). Alternatively, we could search for a specific _id here instead of a particular timestamp.
Note that there is also a 'pressure' value of 7 at the timestamp 2022-11-24T01:00:00.000Z, but it will remain the same because we only loop through the 'sensor1' documents; the document with sensorName set to sensor2 will not be updated.
var updated_docs = [];
while (old_docs.hasNext()) {
  var doc = old_docs.next();
  if (doc.timestamp.getTime() == ISODate("2022-11-24T01:00:00.000Z").getTime()) {
    print(doc.pressure)
    doc.pressure = 555
  }
  updated_docs.push(doc)
}
We now have a copy of all the documents for 'sensor1' and we have updated our desired fields.
Next, we delete all documents with the metaField 'metadata.sensorName' equal to 'sensor1' (on a real database, please don't forget to back up first).
db.test_coll.deleteMany({ "metadata.sensorName": "sensor1" })
And finally, we insert our updated documents into the database.
db.test_coll.insertMany(updated_docs)
This workaround will update values, but it will not upsert them.
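If you need upsert-like behavior, here is a hedged sketch (using the test collection above): check for a matching measurement first, insert when nothing is found, and otherwise fall back to the copy/update/delete/insert workaround.
var match = db.test_coll.findOne({
  "metadata.sensorName": "sensor1",
  "timestamp": ISODate("2022-11-24T01:00:00.000Z")
});
if (match === null) {
  // No existing measurement, so a plain insert covers the "upsert" case.
  db.test_coll.insertOne({
    "metadata": { "sensorName": "sensor1", "sensorLocation": "outside" },
    "timestamp": ISODate("2022-11-24T01:00:00.000Z"),
    "pressure": 555,
    "temperature": 32
  });
} else {
  // A measurement exists: apply the workaround above
  // (copy, modify, deleteMany, insertMany).
}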

Return Only Most Recent Record From Related Entity in OData Query

I am trying to create an OData Query to return Bugs from Azure DevOps for a PowerBI report, but I am not getting the results I am looking for, as one of the Related Entities that I am trying to expand returns multiple results.
My base query looks like this (simplified, with custom fields removed):
https://analytics.dev.azure.com/[organization]/[project]/_odata/v3.0-preview/WorkItems?$select=WorkItemId,WorkItemType,Title,State,LeadTimeDays&$filter=WorkItemType eq 'bug'&$expand=Teams($select=TeamName,AnalyticsUpdatedDate)
Some records return multiple Team Names in the JSON response:
"value": [
{
"WorkItemId": 16547,
"LeadTimeDays": 173.0639004,
"Title": "test",
"WorkItemType": "Bug",
"State": "Closed",
"Severity": "3 - Medium",
"Teams": [
{
"TeamName": "Team1",
"AnalyticsUpdatedDate": "2019-09-17T01:48:46.5433333Z"
},
{
"TeamName": "Team2",
"AnalyticsUpdatedDate": "2019-12-03T16:52:39.9466667Z"
}
]
}
]
I can't tell why these records have multiple values for this Entity, but I only need the most recent (Team 2 in the example above). Is it possible to return only the most recent record for the Related Teams Entity? I've tried using orderby and top on the expand clause and other places in the query to no effect. If I can't do it in the OData query, then I can accomplish it in Power BI after expanding the Table.
I found out how to solve this: I needed semicolons between the clauses within the $expand clause.
https://analytics.dev.azure.com/[organization]/[project]/_odata/v3.0-preview/WorkItems?$select=WorkItemId,WorkItemType,Title,State,LeadTimeDays&$filter=WorkItemType eq 'bug'&$expand=Teams($select=TeamName,AnalyticsUpdatedDate;$orderby=AnalyticsUpdatedDate desc;$top=1)

How to query JSON in MongoDB

I have the following JSON. I want to store it in MongoDB as JSON and query it. How can I do it?
JSON
{
  "id": 4,
  "user": {
    "firstname": "Finn",
    "lastname": "Balor",
    "email": ["fb#wwe.com", "fb1#wwe.com"],
    "password": "whateverHisTaglineIs",
    "address": {
      "street": "64 victoria street",
      "country": "UK"
    }
  }
}
I am storing this as the following document
> db.users.insert({id: 4,user: {firstname:"Finn",lastname:"Balor",email:["fb#wwe.com","fb1#wwe.com"],password:"whateverHisTaglineIs",address:{street:"64 victoria street", country:"UK"}}})
How can I query this record using the country as a selector?
You need to query like below. Go through the MongoDB documentation, and learn MongoDB from MongoDB University.
db.users.find({"user.address.country": "UK"})
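As a side note beyond the original answer (standard MongoDB behavior): the same dot notation also matches inside arrays, and an index on the nested field speeds up the country lookup.
// Dot notation matches array elements too: this finds the document
// because "fb#wwe.com" is one entry in the user.email array.
db.users.find({ "user.email": "fb#wwe.com" })
// Optional: index the nested field used as a selector.
db.users.createIndex({ "user.address.country": 1 })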

mongodb-php: "key" side value for nested querying with find() does not work

I want to retrieve the records matching a booking's client id and show them to the client. I am doing the following:
$mongoDb = $mongoDb->selectCollection('booking');
$bookingInfo = $mongoDb->find(array("client.id" => $_SESSION['client_id']));
My MongoDB record looks like this:
"paymentDue": "",
"client": {
"contacts": [
{
"name": "loy furison",
"email": "loy#hotmail.com"
}
],
"id": "5492abba64363df013000029",
"name": "Birdfire"
},
I want to run the find() query with 'client.id' as the key, but this query doesn't work. What's the issue?
Oddly, the behavior differs only by key name: if I search on 'client.name', records are returned, and if I then loop through those records and compare the id values myself, that works. But the direct query on 'client.id' does not. Why doesn't the expected query work?

How do I manage a sublist in MongoDB?

I have different types of data that would be difficult to model and scale with a relational database (e.g., a product type).
I'm interested in using Mongodb to solve this problem.
I am referencing the documentation at mongodb's website:
http://docs.mongodb.org/manual/tutorial/model-referenced-one-to-many-relationships-between-documents/
For the data type that I am storing, I need to also maintain a relational list of id's where this particular product is available (e.g., store location id's).
In their example regarding "one-to-many relationships with embedded documents", they have the following:
{
  name: "O'Reilly Media",
  founded: 1980,
  location: "CA",
  books: [12346789, 234567890, ...]
}
I am currently importing the data with a spreadsheet, and want to use a batchInsert.
To avoid duplicates, I assume that:
1) I need to create an index (ensureIndex) on the ID, and ignore errors on the insert?
2) Do I then need to loop through all the IDs to insert a new related ID into the books array?
Your question could be defined a little better, but let's consider the case where you have rows in a spreadsheet or other source that are all de-normalized in some way. In a JSON representation, the rows would look something like this:
{
  "publisher": "O'Reilly Media",
  "founded": 1980,
  "location": "CA",
  "book": 12346789
},
{
  "publisher": "O'Reilly Media",
  "founded": 1980,
  "location": "CA",
  "book": 234567890
}
So in order to get that sort of row result into the structure you want, one way to do this is with the "upsert" functionality of the .update() method. Assuming you have some way of looping the input values, and that they are identified with some structure, an analog to this would be something like:
books.forEach(function(book) {
  db.publishers.update(
    { "name": book.publisher },
    {
      "$setOnInsert": {
        "founded": book.founded,
        "location": book.location
      },
      "$addToSet": { "books": book.book }
    },
    { "upsert": true }
  );
})
This essentially simplifies the code so that MongoDB does all of the data collection work for you. Where the "name" of the publisher is considered to be unique, the statement first searches for a document in the collection that matches the query condition given, on "name".
In the case where that document is not found, a new document is inserted. Either the database or the driver will take care of creating the new _id value for this document, and your query "condition" is also automatically written into the new document, since it is an implied value that should exist.
The usage of the $setOnInsert operator is to say that those fields will only be set when a new document is created. The final part uses $addToSet in order to "push" the book values that have not already been found into the "books" array (or set).
The reason for the separation is for when a document is actually found to exist with the specified "publisher" name. In this case, all of the fields under the $setOnInsert will be ignored as they should already be in the document. So only the $addToSet operation is processed and sent to the server in order to add the new entry to the "books" array (set) and where it does not already exist.
So that is simplified logic compared to aggregating the new records in code before sending a new insert operation. However, it is not very "batch"-like, as you are still performing one operation against the server for each row.
This is fixed in MongoDB version 2.6 and above as there is now the ability to do "batch" updates. So with a similar analog:
var batch = [];
books.forEach(function(book) {
  batch.push({
    "q": { "name": book.publisher },
    "u": {
      "$setOnInsert": {
        "founded": book.founded,
        "location": book.location
      },
      "$addToSet": { "books": book.book }
    },
    "upsert": true
  });
  // Send a request to the server once every 500 queued operations.
  if ( ( batch.length % 500 ) == 0 ) {
    db.runCommand({ "update": "publishers", "updates": batch });
    batch = [];
  }
});
// Flush whatever is left over after the loop.
if ( batch.length > 0 ) {
  db.runCommand({ "update": "publishers", "updates": batch });
}
So what this does is build up all of the constructed update statements and send them to the server in a single call, with a sensible number of operations per batch; in this case, one request for every 500 items processed. The actual limit is the BSON document maximum of 16MB, so this can be adjusted as appropriate for your data.
If your MongoDB version is lower than 2.6, then you either use the first form or do something similar using the existing batch insert functionality. But if you choose to insert, then you need to do all the pre-aggregation work within your code, as sketched below.
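For completeness, a rough sketch of that pre-aggregation approach (assuming the collection starts empty, so plain inserts cannot collide with publishers that already exist):
// Group the de-normalized rows by publisher in application code first.
var byPublisher = {};
books.forEach(function(book) {
  if (!byPublisher[book.publisher]) {
    byPublisher[book.publisher] = {
      "name": book.publisher,
      "founded": book.founded,
      "location": book.location,
      "books": []
    };
  }
  // Mimic $addToSet: only add each book id once.
  if (byPublisher[book.publisher].books.indexOf(book.book) === -1) {
    byPublisher[book.publisher].books.push(book.book);
  }
});
// Then insert the aggregated documents in one batch call.
var docs = Object.keys(byPublisher).map(function(key) {
  return byPublisher[key];
});
db.publishers.insert(docs);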
All of the methods are of course supported by the PHP driver, so it is just a matter of adapting this to your actual code and deciding which course you want to take.