Aggregate and select only top records with $last - mongodb

I have the following collection in MongoDB:
{
    "_id" : ObjectId("..."),
    "assetId" : "...",
    "date" : ISODate("..."),
    ...
}
I need to do quite a simple thing - find the latest record for each device/asset. I have the following query:
db.collection.aggregate([
    { "$match" : { "assetId" : { "$in" : [ up_to_80_ids ] } } },
    { "$group" : { "_id" : "$assetId", "date" : { "$last" : "$date" } } }
])
The whole collection is around 20 GB. When I try to run this query it takes around 8 seconds, which does not make any sense to me, since I specified that only the $last record should be selected. Both assetId and date are indexed. Adding { $sort : { date : 1 } } before the $group does not change anything.
Basically, the result of my query should NOT depend on the data size. The only thing I need is the top record for each device/asset. If I instead run 80 separate queries, it takes a few milliseconds.
Is there any way to make MongoDB NOT go through the whole collection? It looks like the database does not reduce the data but processes everything. I understand there should be some good reason for this behaviour, but I cannot find anything in the documentation or on the forums.
UPDATE:
Eventually I found the right syntax for explaining an aggregation in 2.4.6:
db.runCommand( { aggregate: "collection", pipeline : [...] , explain : true })
Result:
{
"serverPipeline" : [
{
"query" : {
"assetId" : {
"$in" : [
"52744d5722f8cb9b4f94d321",
"52791fe322f8014b320dae41",
"52740f5222f8cb9b4f94d306",
... must remove some because of SO limitations
"52744d1722f8cb9b4f94d31d",
"52744b1d22f8cb9b4f94d308",
"52744ccd22f8cb9b4f94d319"
]
}
},
"projection" : {
"assetId" : 1,
"date" : 1,
"_id" : 0
},
"cursor" : {
"cursor" : "BtreeCursor assetId_1 multi",
"isMultiKey" : false,
"n" : 960881,
"nscannedObjects" : 960881,
"nscanned" : 960894,
"nscannedObjectsAllPlans" : 960881,
"nscannedAllPlans" : 960894,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 9,
"nChunkSkips" : 0,
"millis" : 6264,
"indexBounds" : {
"assetId" : [
[
"52740baa22f8cb9b4f94d2e8",
"52740baa22f8cb9b4f94d2e8"
],
[
"52740bed22f8cb9b4f94d2e9",
"52740bed22f8cb9b4f94d2e9"
],
[
"52740c3222f8cb9b4f94d2ea",
"52740c3222f8cb9b4f94d2ea"
],
....
[
"5297770a22f82f9bdafce322",
"5297770a22f82f9bdafce322"
],
[
"529df5f622f82f9bdafce429",
"529df5f622f82f9bdafce429"
],
[
"529f6a6722f89deaabbf9881",
"529f6a6722f89deaabbf9881"
],
[
"52a6e35122f89ce6e2cf4267",
"52a6e35122f89ce6e2cf4267"
]
]
},
"allPlans" : [
{
"cursor" : "BtreeCursor assetId_1 multi",
"n" : 960881,
"nscannedObjects" : 960881,
"nscanned" : 960894,
"indexBounds" : {
"assetId" : [
[
"52740baa22f8cb9b4f94d2e8",
"52740baa22f8cb9b4f94d2e8"
],
[
"52740bed22f8cb9b4f94d2e9",
"52740bed22f8cb9b4f94d2e9"
],
[
"52740c3222f8cb9b4f94d2ea",
"52740c3222f8cb9b4f94d2ea"
],
.......
[
"529df5f622f82f9bdafce429",
"529df5f622f82f9bdafce429"
],
[
"529f6a6722f89deaabbf9881",
"529f6a6722f89deaabbf9881"
],
[
"52a6e35122f89ce6e2cf4267",
"52a6e35122f89ce6e2cf4267"
]
]
}
}
],
"oldPlan" : {
"cursor" : "BtreeCursor assetId_1 multi",
"indexBounds" : {
"assetId" : [
[
"52740baa22f8cb9b4f94d2e8",
"52740baa22f8cb9b4f94d2e8"
],
[
"52740bed22f8cb9b4f94d2e9",
"52740bed22f8cb9b4f94d2e9"
],
[
"52740c3222f8cb9b4f94d2ea",
"52740c3222f8cb9b4f94d2ea"
],
........
[
"529df5f622f82f9bdafce429",
"529df5f622f82f9bdafce429"
],
[
"529f6a6722f89deaabbf9881",
"529f6a6722f89deaabbf9881"
],
[
"52a6e35122f89ce6e2cf4267",
"52a6e35122f89ce6e2cf4267"
]
]
}
},
"server" : "351bcc56-1a25-61b7-a435-c14e06887015.local:27017"
}
},
{
"$group" : {
"_id" : "$assetId",
"date" : {
"$last" : "$date"
}
}
}
],
"ok" : 1
}

Your explain output indicates there are 960,881 items matching the assetIds in your $match stage. MongoDB finds all of them using the index on assetId and streams them all through the $group stage, which is expensive. At the moment MongoDB applies very few whole-pipeline optimizations to the aggregation pipeline, so what you write is pretty much what you get.
MongoDB could optimize this pipeline by sorting by assetId ascending and date descending, then applying the optimization suggested in SERVER-9507, but this is not yet implemented.
For the moment, your best course of action is to do this for each assetId:
db.collection.find({assetId: THE_ID}).sort({date: -1}).limit(1)
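A minimal shell sketch of that approach, assuming the ids are first collected into an array (the two ids shown are just examples taken from the $match above):
var assetIds = [ "52744d5722f8cb9b4f94d321", "52791fe322f8014b320dae41" /* , ... up to 80 ids */ ];
// One indexed query per asset; each returns only the newest document.
var latest = assetIds.map(function (id) {
    return db.collection.find({ assetId: id })
                        .sort({ date: -1 })
                        .limit(1)
                        .toArray()[0];
});
With a compound index on { assetId: 1, date: 1 } each of these lookups can be answered by reading a single index key; even with only the single-field indexes the per-asset sort stays small, which is why 80 of them finish in milliseconds while the one aggregation touches 960,881 documents.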

I am not sure, but if you read this link on the MongoDB site,
it has a NOTE that says: "Only use $last when the $group follows a $sort operation. Otherwise, the result of this operation is unpredictable."
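Applied to the pipeline from the question, that would mean sorting before grouping, something like:
db.collection.aggregate([
    { "$match" : { "assetId" : { "$in" : [ up_to_80_ids ] } } },
    { "$sort" : { "assetId" : 1, "date" : 1 } },
    { "$group" : { "_id" : "$assetId", "date" : { "$last" : "$date" } } }
])
This makes the $last result well-defined, although, as the question notes, it does not by itself make the query any faster.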

I have the same problem in my program. I have tried MongoDB MapReduce, the aggregation framework and other approaches, but finally I stopped on scanning the collection using indexes and forming the result on the client. But now the collection is too big to do that, so I think I will use many small queries as you mentioned above in your question. It is not so beautiful, but it will be the fastest solution IMHO.
Only the first query in your pipeline uses an index. The second stage in the pipeline receives the output of the first, which is big and not indexed. But as mentioned in Pipeline Operators and Indexes, your query can use a compound index, so it is not so clear-cut.
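For reference, a compound index covering both fields used by that pipeline could be created like this (a sketch; ensureIndex is the 2.4-era shell helper):
db.collection.ensureIndex({ assetId: 1, date: 1 })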
I have an idea: you can try to use many $or operators instead of one $in operator, like this:
{ "$match": { "$or": [{"assetId": <id1>}, {"assetId": <id2>}, ...] } }. As far as I know, $or clauses can be executed in parallel and each can use an index, so it would be interesting to test this solution.
P.S. I will be really happy if a solution for this problem is found.

Related

Explain Aggregate framework

I just read this link, Mongodb Explain for Aggregation framework, but it does not explain my problem.
I want to retrieve information about an aggregation, just like db.coll.find({bla:foo}).explain()
I tried
db.coll.aggregate([
    my-op
], {
    explain: true
})
but the result is not an explain plan; it is the query result from the database.
I also tried
db.runCommand({
    aggregate: "mycoll",
    pipeline: [
        my-op
    ],
    explain: true
})
I retrieved information with this command, but I don't get millis, nscannedObjects, etc...
I use MongoDB 2.6.2
Aggregations don't run like traditional queries and you can't run explain on them. They are actually classified as commands, and though they make use of indexes, you can't readily find out how they are being executed in real time.
Your best bet is to take the $match portion of your aggregation and run it as a query with explain() to figure out how the indexes are performing and get an idea of nscanned.
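For example, if your pipeline begins with a stage like { $match: { bla: foo } }, run just that predicate as a plain query (a sketch using the names from your question):
db.coll.find({ bla: foo }).explain()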
I am not sure how you managed to fail to get explain information. In 2.6.x this information is available and you can explain your aggregation results:
db.orders.aggregate([
    // put your whole aggregation query here
], {
    explain: true
})
which gives me something like:
{
"stages" : [
{
"$cursor" : {
"query" : {
"a" : 1
},
"queryPlanner" : {
"plannerVersion" : 1,
"namespace" : "test.a",
"indexFilterSet" : false,
"parsedQuery" : {
"a" : {
"$eq" : 1
}
},
"winningPlan" : {
"stage" : "COLLSCAN",
"filter" : {
"a" : {
"$eq" : 1
}
},
"direction" : "forward"
},
"rejectedPlans" : [ ]
}
}
}
],
"ok" : 1
}

Speed up MongoDB aggregation

I have a sharded collection "my_collection" with the following structure:
{
    "CREATED_DATE" : ISODate(...),
    "MESSAGE" : "Test Message",
    "LOG_TYPE" : "EVENT"
}
The MongoDB environment is sharded with 2 shards. The above collection is sharded using a hashed shard key on LOG_TYPE. There are 7 other possible values for the LOG_TYPE attribute.
I have 1 million documents in "my_collection" and I am trying to find the count of documents by LOG_TYPE using the following query:
db.my_collection.aggregate([
    { "$group" : {
        "_id" : "$LOG_TYPE",
        "COUNT" : { "$sum" : 1 }
    }}
])
But this gets me the result in about 3 seconds. Is there any way to improve this? Also, when I run the explain command, it shows that no index has been used. Does the $group stage not use an index?
There are currently some limitations in what the aggregation framework can do to improve the performance of your query, but you can help it in the following way:
db.my_collection.aggregate([
    { "$sort" : { "LOG_TYPE" : 1 } },
    { "$group" : {
        "_id" : "$LOG_TYPE",
        "COUNT" : { "$sum" : 1 }
    }}
])
By adding a sort on LOG_TYPE you will be "forcing" the optimizer to use an index on LOG_TYPE to get the documents in order. This will improve performance in several ways, but differently depending on the version being used.
On real data, if the data coming into the $group stage is already sorted, the accumulation of the totals is more efficient. You can see from the different query plans that with $sort the shard key index is used. The improvement this gives in actual performance will depend on the number of values in each "bucket" - in general, LOG_TYPE having only seven distinct values makes it an extremely poor shard key, but it does mean that in all likelihood the following code will be a lot faster than even the optimized aggregation:
db.my_collection.distinct("LOG_TYPE").forEach(function(lt) {
    print(db.my_collection.count({ "LOG_TYPE": lt }));
});
There are a limited number of things you can do in MongoDB; at the end of the day this might be a physical problem that extends beyond MongoDB itself, for example latency causing the config servers to respond slowly or results to be brought back from the shards too slowly.
However, you might be able to solve some performance problems by using a covered query. Since you are in fact sharding on LOG_TYPE you will already have an index on it (required before you can shard on it); not only that, but the aggregation framework will automatically add the projection, so that won't help.
MongoDB likely has to communicate with every shard for the results, otherwise known as a scatter-gather operation.
$group on its own will not use an index.
These are my results on 2.4.9:
> db.t.ensureIndex({log_type:1})
> db.t.runCommand("aggregate", {pipeline: [{$group:{_id:'$log_type'}}], explain: true})
{
"serverPipeline" : [
{
"query" : {
},
"projection" : {
"log_type" : 1,
"_id" : 0
},
"cursor" : {
"cursor" : "BasicCursor",
"isMultiKey" : false,
"n" : 1,
"nscannedObjects" : 1,
"nscanned" : 1,
"nscannedObjectsAllPlans" : 1,
"nscannedAllPlans" : 1,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 0,
"indexBounds" : {
},
"allPlans" : [
{
"cursor" : "BasicCursor",
"n" : 1,
"nscannedObjects" : 1,
"nscanned" : 1,
"indexBounds" : {
}
}
],
"server" : "ubuntu:27017"
}
},
{
"$group" : {
"_id" : "$log_type"
}
}
],
"ok" : 1
}
This is the result from 2.6:
> use gthtg
switched to db gthtg
> db.t.insert({log_type:"fdf"})
WriteResult({ "nInserted" : 1 })
> db.t.ensureIndex({log_type: 1})
{ "numIndexesBefore" : 2, "note" : "all indexes already exist", "ok" : 1 }
> db.t.runCommand("aggregate", {pipeline: [{$group:{_id:'$log_type'}}], explain: true})
{
"stages" : [
{
"$cursor" : {
"query" : {
},
"fields" : {
"log_type" : 1,
"_id" : 0
},
"plan" : {
"cursor" : "BasicCursor",
"isMultiKey" : false,
"scanAndOrder" : false,
"allPlans" : [
{
"cursor" : "BasicCursor",
"isMultiKey" : false,
"scanAndOrder" : false
}
]
}
}
},
{
"$group" : {
"_id" : "$log_type"
}
}
],
"ok" : 1
}

MongoDB - Index Intersection with two multikey indexes

I have two arrays in my collection (one holds embedded documents and the other is just a simple array of strings). An example document:
{
    "_id" : ObjectId("534fb7b4f9591329d5ea3d0c"),
    "_class" : "discussion",
    "title" : "A",
    "owner" : "1",
    "tags" : ["tag-1", "tag-2", "tag-3"],
    "creation_time" : ISODate("2014-04-17T11:14:59.777Z"),
    "modification_time" : ISODate("2014-04-17T11:14:59.777Z"),
    "policies" : [
        {
            "participant_id" : "2",
            "action" : "CREATE"
        }, {
            "participant_id" : "1",
            "action" : "READ"
        }
    ]
}
Since some of the queries will include only the policies and some will include the tags and participants arrays, and considering the fact that I can't create a multikey index with two arrays, I thought it would be a classic scenario for index intersection.
I'm executing a query, but I can't see the intersection kick in.
Here are the indexes:
db.discussion.getIndexes()
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "test-fw.discussion"
},
{
"v" : 1,
"key" : {
"tags" : 1,
"creation_time" : 1
},
"name" : "tags",
"ns" : "test-fw.discussion",
"dropDups" : false,
"background" : false
},
{
"v" : 1,
"key" : {
"policies.participant_id" : 1,
"policies.action" : 1
},
"name" : "policies",
"ns" : "test-fw.discussion"
}
Here is the query:
db.discussion.find({
    "$and" : [
        { "tags" : { "$in" : [ "tag-1", "tag-2", "tag-3" ] } },
        { "policies" : { "$elemMatch" : {
            "$and" : [
                { "participant_id" : { "$in" : [
                    "participant-1",
                    "participant-2",
                    "participant-3"
                ]}},
                { "action" : "READ" }
            ]
        }}}
    ]
})
.limit(20000).sort({ "creation_time" : 1 }).explain();
And here is the result of the explain:
"clauses" : [
{
"cursor" : "BtreeCursor tags",
"isMultiKey" : true,
"n" : 10000,
"nscannedObjects" : 10000,
"nscanned" : 10000,
"scanAndOrder" : false,
"indexOnly" : false,
"nChunkSkips" : 0,
"indexBounds" : {
"tags" : [
[
"tag-1",
"tag-1"
]
],
"creation_time" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
}
},
{
"cursor" : "BtreeCursor tags",
"isMultiKey" : true,
"n" : 10000,
"nscannedObjects" : 10000,
"nscanned" : 10000,
"scanAndOrder" : false,
"indexOnly" : false,
"nChunkSkips" : 0,
"indexBounds" : {
"tags" : [
[
"tag-2",
"tag-2"
]
],
"creation_time" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
}
},
{
"cursor" : "BtreeCursor tags",
"isMultiKey" : true,
"n" : 10000,
"nscannedObjects" : 10000,
"nscanned" : 10000,
"scanAndOrder" : false,
"indexOnly" : false,
"nChunkSkips" : 0,
"indexBounds" : {
"tags" : [
[
"tag-3",
"tag-3"
]
],
"creation_time" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
}
}
],
"cursor" : "QueryOptimizerCursor",
"n" : 20000,
"nscannedObjects" : 30000,
"nscanned" : 30000,
"nscannedObjectsAllPlans" : 30203,
"nscannedAllPlans" : 30409,
"scanAndOrder" : false,
"nYields" : 471,
"nChunkSkips" : 0,
"millis" : 165,
"server" : "User-PC:27017",
"filterSet" : false
Each of the tags in the query (tag-1, tag-2 and tag-3) has 10K documents.
Each of the policies ({participant-1,READ}, {participant-2,READ}, {participant-3,READ}) has 10K documents.
The AND operation results in 20K documents.
As I said earlier, I can't see why the intersection of the two indexes (I mean the policies and the tags indexes) doesn't kick in.
Can someone please shed some light on what I'm missing?
There are two things that are actually important to your understanding of this.
The first point is that the query optimizer can only use one index when resolving the query plan and cannot use both of the indexes you have specified. As such it picks the one that is the best "fit" by its own determination, unless you explicitly specify this with a hint. Index intersection somewhat suits this case, but now for the next point:
The second point is documented in the limitations of compound indexes. This actually points out that even if you were to "try" to create a compound index that included both of the array fields you want, you could not. The problem here is that as arrays these introduce too many possibilities for the bounds keys, and a multikey index already introduces a fair level of complexity when used in a compound with a standard field.
The limitation on combining the two multikey indexes is the main problem here; much as at creation time, the complexity of "combining" the two produces too many permutations to make it a viable option.
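The compound-index side of this is easy to demonstrate: trying to build one over both array fields is rejected by the server (an illustration using the fields from the question; the exact error text varies by version):
db.discussion.ensureIndex({ "tags" : 1, "policies.participant_id" : 1 })
// fails with an error along the lines of: cannot index parallel arrays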
It might just be the case that the policies index is actually going to be the better one to use for this type of search, and you could probably encourage this by specifying that field first in the query:
db.discussion.find({
    "policies" : { "$elemMatch" : {
        "participant_id" : { "$in" : [
            "participant-1",
            "participant-2",
            "participant-3"
        ]},
        "action" : "READ"
    }},
    "tags" : { "$in" : [ "tag-1", "tag-2", "tag-3" ] }
})
That is, if it selects the smaller range of data, which it probably does. Otherwise use the hint modifier as mentioned earlier:
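For example, forcing the policies index by the name shown in the getIndexes() output above:
db.discussion.find({
    "policies" : { "$elemMatch" : {
        "participant_id" : { "$in" : [ "participant-1", "participant-2", "participant-3" ] },
        "action" : "READ"
    }},
    "tags" : { "$in" : [ "tag-1", "tag-2", "tag-3" ] }
}).hint("policies")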
If that does not directly help the results, it might be worth reconsidering the schema as something that does not involve having those values in array fields, or some other type of "meta" field that could easily be looked up with an index.
Also note in the edited form that all the wrapping $and statements should not be required, as "and" is implicit in MongoDB queries. As a modifier it is only required if you want two different conditions on the same field.
After doing a little testing, I believe MongoDB can, in fact, use two multikey indexes in an intersection. I created a collection with the following structure:
{
    "_id" : ObjectId("54e129c90ab3dc0006000001"),
    "bar" : [
        "hgcmdflitt",
        ...
        "nuzjqxmzot"
    ],
    "foo" : [
        "bxzvqzlpwy",
        ...
        "xcwrwluxbd"
    ]
}
I created indexes on foo and bar and then ran the following query. Note the true passed in to explain; this enables verbose mode.
db.col.find({"bar":"hgcmdflitt", "foo":"bxzvqzlpwy"}).explain(true)
In the verbose results, you can find the "allPlans" section of the response, which will show you all of the query plans mongo considered.
"allPlans" : [
{
"cursor" : "BtreeCursor bar_1",
...
},
{
"cursor" : "BtreeCursor foo_1",
...
},
{
"cursor" : "Complex Plan"
...
}
]
If you see a plan with "cursor" : "Complex Plan" that means mongo considered using an index intersection. To find the reasons why mongo might not have decided to actually use that query plan, see this answer: Why doesn't MongoDB use index intersection?

How to properly index MongoDB queries with multiple $and and $or statements

I have a collection in MongoDB (app_logins) that holds documents with the following structure:
{
    "_id" : "c8535f1bd2404589be419d0123a569de",
    "app" : "MyAppName",
    "start" : ISODate("2014-02-26T14:00:03.754Z"),
    "end" : ISODate("2014-02-26T15:11:45.558Z")
}
Since the documentation says that the clauses of an $or can be executed in parallel and can use separate indexes, and I assume the same holds true for $and, I added the following indexes:
db.app_logins.ensureIndex({app:1})
db.app_logins.ensureIndex({start:1})
db.app_logins.ensureIndex({end:1})
But when I do a query like this, way too many documents are scanned:
db.app_logins.find(
    {
        $and: [
            { app: "MyAppName" },
            {
                $or: [
                    {
                        $and: [
                            { start: { $gte: new Date(1393425621000) } },
                            { start: { $lte: new Date(1393425639875) } }
                        ]
                    },
                    {
                        $and: [
                            { end: { $gte: new Date(1393425621000) } },
                            { end: { $lte: new Date(1393425639875) } }
                        ]
                    },
                    {
                        $and: [
                            { start: { $lte: new Date(1393425639875) } },
                            { end: { $gte: new Date(1393425621000) } }
                        ]
                    }
                ]
            }
        ]
    }
).explain()
{
"cursor" : "BtreeCursor app_1",
"isMultiKey" : true,
"n" : 138,
"nscannedObjects" : 10716598,
"nscanned" : 10716598,
"nscannedObjectsAllPlans" : 10716598,
"nscannedAllPlans" : 10716598,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 30658,
"nChunkSkips" : 0,
"millis" : 38330,
"indexBounds" : {
"app" : [
[
"MyAppName",
"MyAppName"
]
]
},
"server" : "127.0.0.1:27017"
}
I know that this can be caused by the fact that 10,716,598 documents match the 'app' field, but the other clauses can return a much smaller subset.
Is there any way I can optimize this? The aggregation framework comes to mind, but I was thinking there may be a better way to optimize this, possibly using indexes.
Edit:
It looks like if I add an index on app-start-end, as Josh suggested, I get better results. I am not sure whether I can optimize this further this way, but the results are much better:
{
"cursor" : "BtreeCursor app_1_start_1_end_1",
"isMultiKey" : false,
"n" : 138,
"nscannedObjects" : 138,
"nscanned" : 8279154,
"nscannedObjectsAllPlans" : 138,
"nscannedAllPlans" : 8279154,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 2934,
"nChunkSkips" : 0,
"millis" : 13539,
"indexBounds" : {
"app" : [
[
"MyAppName",
"MyAppName"
]
],
"start" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
],
"end" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
},
"server" : "127.0.0.1:27017"
}
You can use a compound index to further improve performance.
Try using .ensureIndex({app:1, start:1, end:1})
This will allow mongo to match on app using an index; then, within the documents that matched on app, it will match on start, also using the index; likewise, for the documents that matched on start, it will match on end using the index.
I doubt $and is executed in parallel. I haven't seen any documentation suggesting so, either. It just doesn't make sense logically, as $and needs both conditions to match, as opposed to $or, where only one needs to.
Your example only uses "start" & "end" without "app". I would drop "app" from the compound index, which should reduce the index size. It will reduce the chance of RAM swapping if your database grows too big.
If searching on "app" is separate from "start" & "end", then a separate simple index on "app" only, plus the compound index on "start" & "end", will be more efficient, as sketched below.
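A sketch of that layout, reusing the shell helper from the question:
db.app_logins.ensureIndex({ app : 1 })
db.app_logins.ensureIndex({ start : 1, end : 1 })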

Why are any objects being scanned here?

I have an index:
{ "indices.textLc" : 1, "group" : 1, "lc" : 1, "wordCount" : 1, "pattern" : 1, "clExists" : 1 }
and Morphia generates queries like:
{
    $and: [
        { lc: "eng" },
        { $or: [
            { group: "cn" },
            { group: "all" }
        ]},
        { "indices.textLc": { $in: ["media strengthening", "strengthening", "media"] } },
        { wordCount: { $gte: 1 } },
        { wordCount: { $lte: 2 } }
    ]
}
and explain gives:
{
"cursor" : "BtreeCursor indices.textLc_1_group_1_lc_1_wordCount_1_pattern_1_clExists_1 multi",
"nscanned" : 20287,
"nscannedObjects" : 20272,
"n" : 22,
"millis" : 677,
"nYields" : 0,
"nChunkSkips" : 0,
"isMultiKey" : true,
"indexOnly" : false,
"indexBounds" : {
"indices.textLc" : [
[
"media",
"media"
],
[
"media strengthening",
"media strengthening"
],
[
"strengthening",
"strengthening"
]
],
"group" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
],
"lc" : [
[
"eng",
"eng"
]
],
"wordCount" : [
[
1,
1.7976931348623157e+308
]
],
"pattern" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
],
"clExists" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
}
Firstly, I don't understand why any scanning is required, since everything is available in the index. More specifically, why does the wordCount part of the indexBounds not look like this:
"wordCount" : [
    [
        1,
        2
    ]
],
Update 2012-03-20: If it helps explain anything, I'm running MongoDB 2.0.3
Every field in your query being available in your compound index says very little about whether that one index can be used for every clause in your query. There are a few things to consider:
With the exception of top-level $or clauses, which can use an index per clause, every MongoDB query can use at most one index.
Compound indexes only work if each subsequent field in the compound can be used in order, meaning your query allows filtering on the first index field first, the second next, and so on. So if you have an index {a:1, b:1}, a query {b:"Hi!"} would not use the index even though the field is in the compound index (see the sketch below).
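A quick shell illustration of that prefix rule (hypothetical collection and field names):
db.t.ensureIndex({ a : 1, b : 1 })
db.t.find({ b : "Hi!" }).explain()         // BasicCursor: the index is not used, b is not a prefix
db.t.find({ a : 1, b : "Hi!" }).explain()  // BtreeCursor a_1_b_1: both fields are bounded by the index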
Now, the reason your query requires a scan is that your index can only optimize the query execution plan for the "indices.textLc" field (your first index field) and, in this particular case, "lc", because it's a separate clause in your $and.
The "wordCount" part of the explain should actually read:
"wordCount" : [
    [
        1,
        2
    ]
]
I just tested it and it does on my machine, so I think something's going wrong with your Morphia/mapping solution there.
Compound indexes and complicated queries such as yours are a tricky subject. I don't have time right now to look at your query and index to see if they can be optimized. I'll revisit tonight and help you out if I can.