Moving averages with MongoDB's aggregation framework?

If you have 50 years of daily temperature data, for example, how would you calculate moving averages over 3-month windows for that whole period? Can you do that with one query, or would you need multiple queries?
Example Data
01/01/2014 = 40 degrees
12/31/2013 = 38 degrees
12/30/2013 = 29 degrees
12/29/2013 = 31 degrees
12/28/2013 = 34 degrees
12/27/2013 = 36 degrees
12/26/2013 = 38 degrees
.....

The agg framework now has $map and $reduce and $range built in, so array processing is much more straightforward. Below is an example of calculating a moving average on a set of data where you wish to filter by some predicate. The basic setup is that each doc contains filterable criteria and a value, e.g.
{sym: "A", d: ISODate("2018-01-01"), val: 10}
{sym: "A", d: ISODate("2018-01-02"), val: 30}
Here it is:
// This controls the number of observations in the moving average:
days = 4;
c=db.foo.aggregate([
// Filter down to what you want. This can be anything or nothing at all.
{$match: {"sym": "S1"}}
// Ensure dates are going earliest to latest:
,{$sort: {d:1}}
// Turn docs into a single doc with a big vector of observations, e.g.
// {sym: "A", d: d1, val: 10}
// {sym: "A", d: d2, val: 11}
// {sym: "A", d: d3, val: 13}
// becomes
// {_id: "A", prx: [ {v:10,d:d1}, {v:11,d:d2}, {v:13,d:d3} ] }
//
// This will set us up to take advantage of array processing functions!
,{$group: {_id: "$sym", prx: {$push: {v:"$val", d:"$d"}} }}
// Nice additional info. Note use of dot notation on array to get
// just scalar date at elem 0, not the object {v:val,d:date}:
,{$addFields: {numDays: days, startDate: {$arrayElemAt: [ "$prx.d", 0 ]}} }
// The Juice! Assume we have a variable "days" which is the desired number
// of days of moving average.
// The complex expression below does this in python pseudocode:
//
// for z in range(0, size of value vector - # of days in moving avg):
// seg = vector[z:z+days]
// values = seg.v
// dates = seg.d
// for v in values:
// tot += v
// avg = tot/len(seg)
//
// Note that it is possible to overrun the segment at the end of the "walk"
// along the vector, i.e. not enough date-values. So we only run the
// vector to (len(vector) - (days-1)).
// Also, for extra info, we also add the number of days *actually* used in the
// calculation AND the as-of date which is the tail date of the segment!
//
// Again we take advantage of dot notation to turn the vector of
// object {v:val, d:date} into two vectors of simple scalars [v1,v2,...]
// and [d1,d2,...] with $prx.v and $prx.d
//
,{$addFields: {"prx": {$map: {
input: {$range:[0,{$subtract:[{$size:"$prx"}, (days-1)]}]} ,
as: "z",
in: {
avg: {$avg: {$slice: [ "$prx.v", "$$z", days ] } },
d: {$arrayElemAt: [ "$prx.d", {$add: ["$$z", (days-1)] } ]}
}
}}
}}
]);
This might produce the following output:
{
"_id" : "S1",
"prx" : [
{
"avg" : 11.738793632512115,
"d" : ISODate("2018-09-05T16:10:30.259Z")
},
{
"avg" : 12.420766702631376,
"d" : ISODate("2018-09-06T16:10:30.259Z")
},
...
],
"numDays" : 4,
"startDate" : ISODate("2018-09-02T16:10:30.259Z")
}

The way I would tend to do this in MongoDB is to maintain a running sum of the past 90 days in the document for each day's value, e.g.
{"day": 1, "tempMax": 40, "tempMaxSum90": 2232}
{"day": 2, "tempMax": 38, "tempMaxSum90": 2230}
{"day": 3, "tempMax": 36, "tempMaxSum90": 2231}
{"day": 4, "tempMax": 37, "tempMaxSum90": 2233}
Whenever a new data point needs to be added to the collection, instead of reading and summing 90 values you can efficiently calculate the next sum with two simple queries, one addition and one subtraction, like this (pseudo-code):
tempMaxSum90(day) = tempMaxSum90(day-1) + tempMax(day) - tempMax(day-90)
The 90-day moving average at each day is then just the 90-day sum divided by 90.
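A minimal mongo-shell sketch of that maintenance step, assuming a days collection with the document shape shown above (the collection name and the day-numbering scheme are assumptions, not from the original):
// Add day 91, reusing day 90's running sum; all names here are illustrative.
var day = 91, newTempMax = 39;
var prev = db.days.findOne({day: day - 1});       // yesterday's document
var expiring = db.days.findOne({day: day - 90});  // value dropping out of the window
db.days.insertOne({
    day: day,
    tempMax: newTempMax,
    tempMaxSum90: prev.tempMaxSum90 + newTempMax - expiring.tempMax
});
// The 90-day moving average for the new day:
var avg90 = db.days.findOne({day: day}).tempMaxSum90 / 90;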
If you wanted to also offer moving averages over different time-scales (e.g. 1 week, 30 days, 90 days, 1 year), you could simply maintain an array of sums in each document instead of a single sum, one for each time-scale required, as sketched below.
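For example, each document might then carry something like this (the field names and values are assumed, not from the original):
{"day": 4, "tempMax": 37, "sums": [
    {"days": 7, "sum": 259},
    {"days": 30, "sum": 1104},
    {"days": 90, "sum": 2233},
    {"days": 365, "sum": 13230}
]}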
This approach costs additional storage space and additional processing when inserting new data, but it is appropriate in most time-series charting scenarios, where new data is collected relatively slowly and fast retrieval is desirable.

The accepted answer helped me, but it took a while for me to understand how it worked, so I thought I'd explain my method to help others out; particularly in your context I think it will help. Note that it works best on smaller datasets.
First group the data by day, then append all days in an array to each day:
{
"$sort": {
"Date": -1
}
},
{
"$group": {
"_id": {
"Day": "$Date",
"Temperature": "$Temperature"
},
"Previous Values": {
"$push": {
"Date": "$Date",
"Temperature": "$Temperature"
}
}
}
This will leave you with a record that looks like this (it'll be ordered correctly):
{"_id.Day": "2017-02-01",
"Temperature": 40,
"Previous Values": [
{"Day": "2017-03-01", "Temperature": 20},
{"Day": "2017-02-11", "Temperature": 22},
{"Day": "2017-01-18", "Temperature": 03},
...
]},
Now that each day has all days appended to it, we need to remove the items from the Previous Values array that are more recent than this record's _id.Day field, since the moving average is backward-looking:
{
"$project": {
"_id": 0,
"Date": "$_id.Date",
"Temperature": "$_id.Temperature",
"Previous Values": 1
}
},
{
"$project": {
"_id": 0,
"Date": 1,
"Temperature": 1,
"Previous Values": {
"$filter": {
"input": "$Previous Values",
"as": "pv",
"cond": {
"$lte": ["$$pv.Date", "$Date"]
}
}
}
}
},
Each item in the Previous Values array will only contain the dates that are less than or equal to the date for each record:
{"Day": "2017-02-01",
"Temperature": 40,
"Previous Values": [
{"Day": "2017-01-31", "Temperature": 33},
{"Day": "2017-01-30", "Temperature": 36},
{"Day": "2017-01-29", "Temperature": 33},
{"Day": "2017-01-28", "Temperature": 32},
...
]}
Now we can pick our average window size. Since the data is by day, for a week we'd take the first 7 records of the array; for a month, 30; and for 3 months, 90 days:
{
"$project": {
"_id": 0,
"Date": 1,
"Temperature": 1,
"Previous Values": {
"$slice": ["$Previous Values", 0, 90]
}
}
},
To average the previous temperatures we unwind the Previous Values array (a stage like {"$unwind": "$Previous Values"}), then group by the date field. The unwind operation does this:
{"Day": "2017-02-01",
"Temperature": 40,
"Previous Values": {
"Day": "2017-01-31",
"Temperature": 33}
},
{"Day": "2017-02-01",
"Temperature": 40,
"Previous Values": {
"Day": "2017-01-30",
"Temperature": 36}
},
{"Day": "2017-02-01",
"Temperature": 40,
"Previous Values": {
"Day": "2017-01-29",
"Temperature": 33}
},
...
See that the Date field is the same, but we now have a document for each of the previous dates from the Previous Values array. Now we can group back on date, then average Previous Values.Temperature to get the moving average:
{"$group": {
"_id": {
"Day": "$Date",
"Temperature": "$Temperature"
},
"3 Month Moving Average": {
"$avg": "$Previous Values.Temperature"
}
}
}
That's it! I know that joining every record to every record isn't ideal, but it works fine on smaller datasets.

Starting in Mongo 5, it's a perfect use case for the new $setWindowFields aggregation operator:
Note that I consider the rolling average to have a 3-day window for simplicity (today and the 2 previous days):
// { date: ISODate("2013-12-26"), temp: 38 }
// { date: ISODate("2013-12-27"), temp: 36 }
// { date: ISODate("2013-12-28"), temp: 34 }
// { date: ISODate("2013-12-29"), temp: 31 }
// { date: ISODate("2013-12-30"), temp: 29 }
// { date: ISODate("2013-12-31"), temp: 38 }
// { date: ISODate("2014-01-01"), temp: 40 }
db.collection.aggregate([
{ $setWindowFields: {
sortBy: { date: 1 },
output: {
movingAverage: {
$avg: "$temp",
window: { range: [-2, "current"], unit: "day" }
}
}
}}
])
// { date: ISODate("2013-12-26"), temp: 38, movingAverage: 38 }
// { date: ISODate("2013-12-27"), temp: 36, movingAverage: 37 }
// { date: ISODate("2013-12-28"), temp: 34, movingAverage: 36 }
// { date: ISODate("2013-12-29"), temp: 31, movingAverage: 33.67 }
// { date: ISODate("2013-12-30"), temp: 29, movingAverage: 31.33 }
// { date: ISODate("2013-12-31"), temp: 38, movingAverage: 32.67 }
// { date: ISODate("2014-01-01"), temp: 40, movingAverage: 35.67 }
This:
chronologically sorts documents: sortBy: { date: 1 }
creates for each document a span of documents (the window) that:
includes the "current" document and all previous documents within a 2-day window
and within that window, averages temperatures: $avg: "$temp"
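To match the original question's 3-month window, the same stage just needs a wider range; for instance, approximating 3 months as 90 days:
db.collection.aggregate([
  { $setWindowFields: {
    sortBy: { date: 1 },
    output: {
      movingAverage: {
        $avg: "$temp",
        window: { range: [-89, "current"], unit: "day" }
      }
    }
  }}
])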

I think I may have an answer to my own question. Map-reduce would do it. First use emit to map each document to the neighbors it should be averaged with, then use reduce to average each array... and that new array of averages should be the moving-average plot over time, since its id would be the new date interval you care about.
I guess I needed to understand map-reduce better...
:)
For instance... if we wanted to do it in memory (later we can create collections)
GIST https://gist.github.com/mrgcohen/3f67c597a397132c46f7
Does that look right?

I don't believe the aggregation framework can do this for multiple dates in the current version (2.6), or at least can't without some serious gymnastics. The reason is that the aggregation pipeline processes one document at a time and one document only, so it would be necessary to somehow create a document for each day that contains the previous 3 months' worth of relevant information. That would end in a $group stage calculating the average, meaning that the prior stages would have to produce about 90 copies of each day's record, each with a distinguishing key that can be used for the $group.
So I don't see a way to do this for more than one date at a time in a single aggregation. I'd be happy to be wrong and have to edit/remove this answer if somebody finds a way to do it, even if it's so complicated it's not practical. A PostgreSQL PARTITION-type function would do the job here; maybe that function will be added someday.

Related

Mongodb relative frequency in grouping aggregation

I have data that looks like this
{"customer_id":1, "amount": 100, "item": "a"}
{"customer_id":1, "amount": 20, "item": "b"}
{"customer_id":2, "amount": 25, "item": "a"}
{"customer_id":3, "amount": 10, "item": "a"}
{"customer_id":4, "amount": 10, "item": "b"}
Using R I can get an overview of relative frequencies very easily by doing this
data %>%
group_by(customer_id,item) %>%
summarise(total=sum(amount)) %>%
mutate(per_customer_spend=total/sum(total))
Which returns:
customer_id item total per_customer_spend
<dbl> <chr> <dbl> <dbl>
1 1 a 100 0.833
2 1 b 20 0.167
3 2 a 25 1
4 3 a 10 1
5 4 b 10 1
I can't figure out how to do this in Mongo efficiently, the best solution I have so far involves multiple groups and pushing and unwinding.
If you don't want to change the data structure, there's no way around grouping all the data, as we need to determine the total amount spent by each user. This requires just a single $group stage and a single $unwind stage, and would look something like this:
db.collection.aggregate([
{
$group: {
_id: "$customer_id",
total: {$sum: "$amount"},
rootHolder: {$push: "$$ROOT"}
}
},
{
$unwind: "$rootHolder"
},
{
$project: {
newRoot: {
$mergeObjects: [
"$rootHolder",
{total: "$total"}
]
}
}
},
{
$replaceRoot: {
newRoot: "$newRoot"
}
},
{
$project: {
customer_id: 1,
item: 1,
total: "$amount",
per_customer_spend: {$divide: ["$amount", "$total"]}
}
}
])
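Run against the five sample documents above, this should produce numbers equivalent to the R output, e.g. (_id omitted, shell float formatting):
{ "customer_id": 1, "item": "a", "total": 100, "per_customer_spend": 0.8333333333333334 }
{ "customer_id": 1, "item": "b", "total": 20, "per_customer_spend": 0.16666666666666666 }
{ "customer_id": 2, "item": "a", "total": 25, "per_customer_spend": 1 }
{ "customer_id": 3, "item": "a", "total": 10, "per_customer_spend": 1 }
{ "customer_id": 4, "item": "b", "total": 10, "per_customer_spend": 1 }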
With that said, this pipeline becomes very expensive as scale increases. Depending on how big the scale is and the number of unique customer_id x item pairs, I would advise the following:
Considering Mongo doesn't like data duplication, and assuming a user does not "buy" new items too often, it might be worth actually saving the total as a field in the current collection (which requires updating all of a user's items on purchase). I know this sounds "weird" and costly, but again, depending on the frequency of purchases it might actually be worth it.
Assuming you decide not to do the above, I would instead create a new collection with customer_id and customer_total. Mind you, this field will still require upkeep, although much cheaper.
With this collection you can then $lookup the total (which again can be expensive).
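A sketch of that $lookup, assuming a side collection customer_totals holding documents of the form { customer_id, total } (the collection and its field names are assumptions):
db.collection.aggregate([
  {
    $lookup: {
      from: "customer_totals",
      localField: "customer_id",
      foreignField: "customer_id",
      as: "ct"
    }
  },
  { $unwind: "$ct" },
  {
    $project: {
      customer_id: 1,
      item: 1,
      total: "$amount",
      per_customer_spend: { $divide: ["$amount", "$ct.total"] }
    }
  }
])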

Put properties with different name in one field in MongoDB

I am getting requests from different devices as JSON. Some of them report temperature as "T", some others as "temp", and it can be different in other devices. Is it possible, in MongoDB, to put all of these values in a single field "temperature"?
It doesn't matter if it is "temp" or "T" or "tempC"; just put all of them in the "temperature" field.
Here is an example of my data:
[
{ "ip": "12:3B:6A:1A:E6:8B", "type": 0, "t": 37},
{ "ip": "22:33:66:1A:E6:8B", "type": 1, "temperature": 40},
{ "ip": "1A:3C:6A:1A:E6:8B", "type": 1, "temp": 30}
]
I want to put temp, t and temperature in Temperature field in my collection.
You can use the $ifNull operator to control which value should be transferred into your output, like below:
db.col.aggregate([
{
$addFields: { Temperature: { $ifNull: [ { $ifNull: [ "$t", "$temperature"] }, "$temp" ] } }
},
{
$project: {
t: 0,
temperature: 0,
temp: 0
}
}
])
This will merge those three fields into one Temperature field, taking the first non-empty value. Additionally, if you want to update your collection, you can add $out as a last aggregation stage, like { $out: "col" }, but keep in mind that it will entirely replace your source collection.
I think MongoDB supports regular expressions, but they are meant to search data, not to insert it based on field-name matches.
I am quite sure you would need some kind of facade in front of your database to achieve that.
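A minimal sketch of such a facade in application-side JavaScript (the function name and target collection are assumptions):
// Normalize a device reading before it ever reaches the collection.
function normalizeReading(doc) {
    // Take whichever temperature-like field the device sent.
    var t = doc.t !== undefined ? doc.t
          : doc.temp !== undefined ? doc.temp
          : doc.temperature;
    return { ip: doc.ip, type: doc.type, Temperature: t };
}

db.col.insertOne(normalizeReading({ "ip": "1A:3C:6A:1A:E6:8B", "type": 1, "temp": 30 }));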

MongoDB count concurrent

I have a collection with a start_time and an end_time denoting a session.
I need to count the maximum number of concurrent sessions in a given hour, something like aggregate and group by the hour.
What's the most efficient way to do this?
Your query to do so will be something like:
db.collection_name.aggregate(
[ { $group : { _id : "$hour", no_of_sessions : { $sum: 1 } } } ]
)
Here "$hour" is your time variable (assuming you are just storing the hour; if not, you can apply { $hour: "$date" } to derive it from the date).
If hours are like 1:01 to 2:59 then you will need to define _id as a compound key, something like: _id: {start_time: "$start_time", end_time: "$end_time"}.
To get a more specific answer, please give the exact case.
Cheers!
The problems with this type of aggregation come in because a "session" with a "start_time" and an "end_time" can "emit" hours that cross each grouped hour, so it is present in more than one hourly time period until the session ends. This can potentially span hours.
The other main problem here is that the session may indeed "start" before the time period you want to look at, or even end "after" the specified range, such as a day. Here you need to consider that you are generally looking for a "start_time" that is less than the end of the day you are looking at, and an "end_time" that is greater than the start of the day you are looking at.
Even so, there are other considerations, such as: does something have an "end_time" at all at the time it is analysed? Usually the best way to deal with this is to consider a reasonable "session life" value, and factor that into the basic query selection.
So with a few variables at play, we basically come up with the "base criteria" for selection:
var startDay = new Date("2015-08-30"),
endDay = new Date("2015-08-31"),
oneHour = 1000*60*60,
sessionTime = 3*oneHour;
var query = {
"start_time": {
"$gte": new Date(startDay.valueOf()-sessionTime),
"$lt": endDay
},
"$or": [
{ "end_time": { "$exists": false } },
{ "end_time": null },
{ "end_time": {
"$lt": new Date(endDay.valueOf()+sessionTime),
"$gte": startDay
}}
]
};
We work with a 3-hour window here, for example, so that dates found outside of the current day are also included in the "possible" output.
Next consider some data to work with as a sample:
{ "_id": 1, "start_time": new Date("2015-08-29T23:30"), "end_time": new Date("2015-08-29T23:45") },
{ "_id": 2, "start_time": new Date("2015-08-29T23:30"), "end_time": new Date("2015-08-30T00:45") },
{ "_id": 3, "start_time": new Date("2015-08-30T00:30"), "end_time": new Date("2015-08-30T01:30") },
{ "_id": 4, "start_time": new Date("2015-08-30T01:30"), "end_time": new Date("2015-08-30T01:45") },
{ "_id": 5, "start_time": new Date("2015-08-30T01:30"), "end_time": new Date("2015-08-30T03:45") },
{ "_id": 6, "start_time": new Date("2015-08-30T01:45"), "end_time": new Date("2015-08-30T02:30") },
{ "_id": 7, "start_time": new Date("2015-08-30T23:30"), "end_time": null },
{ "_id": 8, "start_time": new Date("2015-08-30T23:30") },
{ "_id": 9, "start_time": new Date("2015-08-31T01:30") }
If we look at the criteria for the date range and the general query selection, then you can expect that records 2 through 8 would be considered in the day we are looking at as they either "ended" within the day or "started" within the day. The "session window" is mainly because some data does not have an "end_time", being either null or not present. That "window" helps filter out other irrelevant data that may be from more recent dates than what is being looked at, and keeps the size reasonable.
A quick visual scan should tell you that the counts per hour should be this:
0: 2
1: 4
2: 2
3: 1
23: 2
The actual process is better handled with mapReduce than any other aggregation medium. This is because the conditional logic required allows a "single document" to be "emitted" as a value valid for multiple periods, so there is an inherent "looping" required here:
db.sessions.mapReduce(
function() {
var oneHour = 1000*60*60,
start = (this.start_time > startDay)
? ( this.start_time.valueOf() - ( this.start_time.valueOf() % oneHour ))
: startDay,
end = (this.hasOwnProperty("end_time") && this.end_time != null)
? ( this.end_time.valueOf() - ( this.end_time.valueOf() % oneHour ))
: endDay;
// Uncomment to Emit blank values for each hour on first iteration
/*
if ( count == 0 ) {
for ( var x = 1; x <= 24; x++ ) {
emit(x,0);
}
count++;
}
*/
for ( var y = start; y <= end && (y-startDay)/oneHour < 24; y+= oneHour) {
emit(
(y-startDay ==0) ? 0 : ((y-startDay)/oneHour)
,1
);
}
},
function(key,values) {
return Array.sum(values);
},
{
"out": { "inline": 1 },
"scope": {
"startDay": startDay.valueOf(),
"endDay": endDay.valueOf(),
"count": 0
},
"query": query
}
)
Combined with the variables set earlier, this will correctly count how many sessions are running within each hour:
"results" : [
{
"_id" : 0,
"value" : 2
},
{
"_id" : 1,
"value" : 4
},
{
"_id" : 2,
"value" : 2
},
{
"_id" : 3,
"value" : 1
},
{
"_id" : 23,
"value" : 2
}
],
The basic actions for each record are:
Round out the start and end time each to 1 hour
Replace each value with either the startDay for the day being looked at or the endDay where the start was before the current day or the end_time is not present
From the start time, loop with one hour increments until the end time is reached or one day difference is reached. Each emission is a "count" for the hours difference from the startDay.
Reduce to sum the totals per each hour
There is an optional section which will also emit 0 values for every hour of the day, so that if no data is recorded then at least there is output for that hour as 0.

Get last record for several items at once with mongo

In my mongo database, I have basically 2 collections:
pupils
{_id: ObjectID(539ab7ffefbb93120c9697f7), firstname: 'Arnold', lastname: 'Smith'}
{_id: ObjectID(539ab7ffefbb93120c5473c3), firstname: 'Steven', lastname: 'Jens'}
marks
{ date: '2014-06-12', value: 12, pupilID: 539ab7ffefbb93120c9697f7}
{ date: '2014-06-05', value: 9, pupilID: 539ab7ffefbb93120c9697f7}
{ date: '2014-05-10', value: 17, pupilID: 539ab7ffefbb93120c9697f7}
{ date: '2014-05-10', value: 7, pupilID: 539ab7ffefbb93120c5473c3}
Is there a way with the mongo shell to get the last mark of each pupil without having to manually loop through the list of pupils and get the last mark for each one?
Currently I loop through each pupil and perform:
db.marks.find({pupilID: pupilID}).sort({_id: -1}).limit(1)
But I'm quite concerned about performance if the marks collection contains a high number of items.
Well, your dates are not the best example here as they are strings. You should convert them to proper "Date" types, but at least they sort lexically.
This is not the "join" you seem to be implicitly looking for, but you can get the $last mark for each pupil from your "marks" collection, which should go some way toward your result:
db.marks.aggregate([
{ "$sort": { "date": 1 } },
{ "$group": {
"_id": "$pupilID",
"date": { "$last": "$date" },
"value": { "$last": "$value" }
}}
])
And that will give you the last mark "value" by date for each "pupilID". The joining of data is up to you, but this is better than looping whole collections or otherwise firing off one query per "pupil".
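On MongoDB 3.2+ you could also push the join itself into the pipeline with $lookup; a sketch, assuming marks.pupilID stores the pupil's _id (note that pupils with no marks drop out after the $unwind):
db.pupils.aggregate([
  { "$lookup": {
      "from": "marks",
      "localField": "_id",
      "foreignField": "pupilID",
      "as": "marks"
  }},
  { "$unwind": "$marks" },
  { "$sort": { "marks.date": 1 } },
  { "$group": {
      "_id": "$_id",
      "firstname": { "$first": "$firstname" },
      "lastname": { "$first": "$lastname" },
      "lastMark": { "$last": "$marks" }
  }}
])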

How to return index of array item in Mongodb?

The document is like below.
{
"title": "Book1",
"dailyactiviescores":[
{
"date": 2013-06-05,
"score": 10,
},
{
"date": 2013-06-06,
"score": 21,
},
]
}
The daily active score is intended to increase once the book is opened by a reader. The first solution that comes to mind is to use "$" to find whether the target date has a score or not, and deal with it.
err = bookCollection.Update(
{"title":"Book1", "dailyactiviescores.date": 2013-06-06},
{"$inc":{"dailyactiviescores.$.score": 1}})
if err == ErrNotFound {
bookCollection.Update({"title":"Book1"}, {"$push":...})
}
But I cannot help thinking: is there any way to return the index of an item inside an array? If so, I could use one query to do the job rather than two, like this:
index = bookCollection.Find(
{"title":"Book1", "dailyactiviescores.date": 2013-06-06}).Select({"$index"})
if index != -1 {
incTarget = FormatString("dailyactiviescores.%d.score", index)
bookCollection.Update(..., {"$inc": {incTarget: 1}})
} else {
//push here
}
Incrementing a field that's not present isn't the issue as doing $inc:1 on it will just create it and set it to 1 post-increment. The issue is when you don't have an array item corresponding to the date you want to increment.
There are several possible solutions here (that don't involve multiple steps to increment).
One is to pre-create all the dates in the array elements with score: 0, like so:
{
"title": "Book1",
"dailyactiviescores":[
{
"date": 2013-06-01,
"score": 0,
},
{
"date": 2013-06-02,
"score": 0,
},
{
"date": 2013-06-03,
"score": 0,
},
{
"date": 2013-06-04,
"score": 0,
},
{
"date": 2013-06-05,
"score": 0,
},
{
"date": 2013-06-06,
"score": 0
}, { etc ... }
]
}
But how far into the future do you go? So one option here is to "bucket": for example, have an activities document "per month", and before the start of a month have a job that creates the new documents for the next month. Slightly yucky, but it'll work.
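A monthly bucket document might look something like this (the month field and the pre-filled array are assumed, not from the original):
{
    "title": "Book1",
    "month": "2013-06",
    "dailyactiviescores": [
        { "date": "2013-06-01", "score": 0 },
        { "date": "2013-06-02", "score": 0 },
        ...
        { "date": "2013-06-30", "score": 0 }
    ]
}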
Other options involve slight changes in schema.
You can use a collection with book, date, and activity score. Then you can use a simple upsert to increment a score:
db.books.update({title:"Book1", date:"2013-06-02"}, {$inc:{score:1}}, {upsert:true})
This will increment the score or insert a new record with score: 1 for this book and date, and your collection will look like this:
{
"title": "Book1",
"date": 2013-06-01,
"score": 10,
},
{
"title": "Book1",
"date": 2013-06-02,
"score": 1,
}, ...
Depending on how much you simplified your example from your real use case, this might work well.
Another option is to stick with embedding the scores in the book document, but switch to using the date string as a key that you increment:
Schema:
{
"title": "Book1",
"dailyactiviescores": {
"2013-06-01": 10,
"2013-06-02": 8
}
}
Note it's now a subdocument and not an array, and you can do:
db.books.update({title:"Book1"}, {$inc: {"dailyactiviescores.2013-06-03": 1}})
and it will add a new date into the subdocument and increment it resulting in:
{
"title": "Book1",
"dailyactiviescores": {
"2013-06-01": 10,
"2013-06-02": 8,
"2013-06-03": 1
}
}
Note it's now harder to "add up" the scores for the book, so you can atomically also update a "subtotal" in the same update statement, whether it's for all time or just for the month.
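For instance, a single update can bump both the day's score and a running total atomically (the scoreTotal field is an assumed addition, not part of the original schema):
db.books.update(
    { title: "Book1" },
    { $inc: { "dailyactiviescores.2013-06-03": 1, "scoreTotal": 1 } }
)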
But here it's once again problematic to keep adding days to this subdocument; what happens when you're still around in a few years and these book documents have grown hugely?
I suspect that unless you will only be keeping activity scores for the last N days (which you can do with the capped array feature in 2.4), it will be simpler to have a separate collection for book-activity-score tracking, where each book-day is a separate document, than to embed the scores for each day into the book in a collection of books.
According to the docs:
The $inc operator increments a value of a field by a specified amount.
If the field does not exist, $inc sets the field to the specified
amount.
So if there is no score field in the array item, $inc will set it to 1 in your case. For example, given:
{
"title": "Book1",
"dailyactiviescores":[
{
"date": 2013-06-05,
"score": 10,
},
{
"date": 2013-06-06,
},
]
}
bookCollection.Update(
{"title":"Book1", "dailyactiviescores.date": 2013-06-06},
{"$inc":{"dailyactiviescores.$.score": 1}})
will result into:
{
"title": "Book1",
"dailyactiviescores":[
{
"date": 2013-06-05,
"score": 10,
},
{
"date": 2013-06-06,
"score": 1
},
]
}
Hope that helps.