I have an ArangoDB query. It usually executes in normal time, but suddenly it takes far too long.
WITH DependantPerson, PersonServiceProfile, MonitoringRule, DeviceSensor, PersonDevice, Subscription, Person, Link, DeviceApplication, Contact
FOR res IN @resource_from
    LET sub = (
        FOR e, v, p IN @min_depth..@max_depth INBOUND res Link
            FILTER (p.vertices[*].type ANY == "DeviceSensor")
            RETURN p
    )
    RETURN {
        [res]: sub[*
            RETURN {
                relations: CURRENT.edges[* FILTER (CURRENT.type != "Link" && CURRENT.type != "ObserveLink") RETURN CURRENT],
                resources: CURRENT.vertices[* FILTER (CURRENT.type == "DeviceSensor" && CURRENT.attributes.deviceId == @id) RETURN CURRENT]
            }
        ]
    }
Cache is turned off
require("@arangodb/aql/cache").properties()
{
    "mode" : "off",
    "maxResults" : 128
}
How can I optimize this? Thanks.
I have a JSON date column in AgGrid with 20K+ records. I am using a date comparator to sort the dates. Is there a better way than the one below to sort such a date type, as I am experiencing slowness in my application while sorting and filtering a huge data set?
For Sorting
comparator: function (date1, date2) {
    var date1Number = fnDateToComparableNumber(moment(date1).toDate());
    var date2Number = fnDateToComparableNumber(moment(date2).toDate());
    if ((date1Number === null || isNaN(date1Number)) && (date2Number === null || isNaN(date2Number))) {
        return 0;
    }
    if (date1Number === null || isNaN(date1Number)) {
        return -1;
    }
    if (date2Number === null || isNaN(date2Number)) {
        return 1;
    }
    return date1Number - date2Number;
}
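For comparison, here is a minimal sketch of a comparator that skips moment entirely: Date.parse understands ISO-8601 strings natively, and a Map caches the parsed timestamps so each distinct value is parsed only once. The cache and the ISO-format input are assumptions, not part of the original code.

```javascript
// Sketch: comparator without moment. Assumes cell values are ISO-8601
// date strings; empty/unparseable values sort first, as in the original.
const tsCache = new Map();

function toTimestamp(value) {
  if (value == null || value === '') return NaN;
  // Parse each distinct string only once and cache the numeric timestamp.
  if (!tsCache.has(value)) tsCache.set(value, Date.parse(value));
  return tsCache.get(value);
}

function dateComparator(date1, date2) {
  const a = toTimestamp(date1);
  const b = toTimestamp(date2);
  if (isNaN(a) && isNaN(b)) return 0; // both missing: treat as equal
  if (isNaN(a)) return -1;            // missing values sort first
  if (isNaN(b)) return 1;
  return a - b;
}
```

Repeated sorts of the same 20K rows then hit the cache rather than re-parsing every value on every comparison.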
For Filtering
filterParams: {
    //applyButton: true,
    clearButton: true,
    // provide comparator function
    comparator: function (filterLocalDateAtMidnight, cellValue) {
        // We create a Date object for comparison against the filter date
        if (cellValue == '') {
            return 1;
        }
        var cellDate = moment(cellValue).toDate();
        // Now that both parameters are Date objects, we can compare
        if (cellDate < filterLocalDateAtMidnight) {
            return -1;
        } else if (cellDate > filterLocalDateAtMidnight) {
            return 1;
        } else {
            return 0;
        }
    }
},
Note:
I am using moment for date manipulations.
The JSON date is required for Excel export; otherwise processCellCallback is expensive for a big data set.
I have done some hack for the Excel format, or else Excel will display the date as a number.
We have an enterprise license for Ag-Grid.
I've had similar problems, at least for sorting by date. Do not use moment for parsing dates; it is way too slow for the amount of data you are using. Instead, use new Date() (note that calling Date() without new returns a string, not a Date object):
comparator: function (date1, date2) {
    var date1Number = fnDateToComparableNumber(new Date(date1));
    var date2Number = fnDateToComparableNumber(new Date(date2));
    if ((date1Number === null || isNaN(date1Number)) && (date2Number === null || isNaN(date2Number))) {
        return 0;
    }
    if (date1Number === null || isNaN(date1Number)) {
        return -1;
    }
    if (date2Number === null || isNaN(date2Number)) {
        return 1;
    }
    return date1Number - date2Number;
}
You may need to parse the date manually, but you will still be much faster than with moment.
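If you do parse manually, here is a sketch of what fnDateToComparableNumber could look like. This is a hypothetical reconstruction of the asker's helper, assuming values that start with "YYYY-MM-DD" (e.g. ISO-8601 strings):

```javascript
// Sketch (hypothetical): turn a "YYYY-MM-DD..." string into the integer
// YYYYMMDD, which compares correctly as a plain number -- no Date object,
// no moment, just three slices.
function fnDateToComparableNumber(dateStr) {
  if (!dateStr) return NaN; // empty/missing values become NaN
  return Number(dateStr.slice(0, 4)) * 10000 +
         Number(dateStr.slice(5, 7)) * 100 +
         Number(dateStr.slice(8, 10));
}
```

Because it never constructs a Date, this avoids both timezone surprises and allocation cost per comparison.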
For sorting dates, a valueFormatter function helps. This worked for me and didn't slow down sorting on large datasets:
{
    headerName: 'Date',
    field: 'date',
    valueFormatter: params => dateFormatter(params.value),
    sortable: true
}
I'm performing an incremental map reduce on a 2.6 mongod instance and everything worked fine and dandy until recently.
db.runCommand({
    mapreduce: "timespanaggregations",
    query: {
        "start": { $gt: previousRun }
    },
    map: function Map() {
        delete this.start;
        delete this.end;
        delete this._id;
        var key = this.user,
            value = this;
        delete value.user;
        emit(key, value);
    },
    reduce: function Reduce(user, aggregationData) {
        var result = {};
        aggregationData.forEach(function (timespan) {
            Object.keys(timespan).forEach(function (field) {
                if (!result[field] || (field.indexOf('Last', field.length - 5) != -1)) {
                    result[field] = timespan[field];
                } else if ((field.indexOf('Count', field.length - 5) != -1) || (field.indexOf('Sum', field.length - 5) != -1)) {
                    result[field] += timespan[field];
                } else if (field.indexOf('Min', field.length - 5) != -1) {
                    result[field] = (result[field] > timespan[field]) ? timespan[field] : result[field];
                } else if (field.indexOf('Max', field.length - 5) != -1) {
                    result[field] = (result[field] < timespan[field]) ? timespan[field] : result[field];
                }
            });
        });
        return result;
    },
    sort: { "user": 1, "start": 1 },
    out: { reduce: "lifetime_agg" },
    jsMode: true
});
I'm pretty sure that I'm not breaking any requirements, and nothing is anywhere near the limits. But if I don't use the query to make the chunks small enough, the command does nothing at all. It simply responds with:
counts: {
    input: 0,
    emit: 0,
    reduce: 0,
    output: <number of records already in lifetime_agg>
}
I would have expected some sort of an error message. The whole thing still works, if I force it to run with a smaller query result.
jsMode is set to true because I was experimenting with it; the number of records is nowhere near 500,000. It behaves the same way with jsMode set to false.
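For illustration, the suffix-based merge rules in the reduce above can be sketched as a standalone function (the name mergeTimespans is hypothetical, and endsWith stands in for the indexOf suffix check):

```javascript
// Sketch of the reduce's field-merging rules: *Last takes the latest
// value, *Count/*Sum accumulate, *Min keeps the minimum, *Max the
// maximum, and any other field keeps the first value seen.
function mergeTimespans(timespans) {
  const result = {};
  for (const span of timespans) {
    for (const field of Object.keys(span)) {
      if (!(field in result) || field.endsWith('Last')) {
        result[field] = span[field];
      } else if (field.endsWith('Count') || field.endsWith('Sum')) {
        result[field] += span[field];
      } else if (field.endsWith('Min')) {
        result[field] = Math.min(result[field], span[field]);
      } else if (field.endsWith('Max')) {
        result[field] = Math.max(result[field], span[field]);
      }
    }
  }
  return result;
}
```

Note that for incremental output with out: {reduce: ...}, rules like these must be associative and idempotent-safe, since previously reduced values are fed back through Reduce again.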
In Mongo how can I find all documents that have a given key and value, regardless of where that key appears in the document's key/value hierarchy?
For example the input key roID and value 5 would match both:
{
roID: '5'
}
and
{
other: {
roID: '5'
}
}
There is no built-in way to do this. You would have to scan each matched document recursively to try to locate that attribute, which is not recommended. You might want to think about restructuring your data, or perhaps manipulating it into a more uniform format, so that it will be easier (and faster) to query.
If your desired key appears in a fixed number of different locations, you could use the $or operator to scan all the possibilities.
Taking your sample documents as an example, your query would look something like this:
db.data.find( { "$or": [
    { "roID": "5" },
    { "other.roID": "5" },
    { "foo.bar.roID": "5" },
    // any other possible locations of roID
    ...
] } )
If the number of documents in the collection is not too large, then it can be done like this:
db.system.js.save({_id: "keyValueExisted", value: function (key, value) {
    function findme(obj) {
        for (var x in obj) {
            var v = obj[x];
            if (x == key && v == value) {
                return true;
            } else if (v instanceof Object) {
                if (findme(v)) return true;
            }
        }
        return false;
    }
    return findme(this);
}});
var param = ['roID', '5'];
db.c.find({$where: "keyValueExisted.apply(this, " + tojsononeline(param) + ");"});
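The same recursive search can also run client-side, for example to filter documents a driver has already fetched. A plain-JavaScript sketch (loose == is kept so '5' also matches the number 5, as in the shell version):

```javascript
// Recursively check whether `key` appears anywhere in `obj`'s hierarchy
// with the given `value`. Mirrors the stored keyValueExisted helper.
function keyValueExists(obj, key, value) {
  for (const k of Object.keys(obj)) {
    const v = obj[k];
    if (k === key && v == value) return true; // loose ==: '5' matches 5
    if (v !== null && typeof v === 'object' && keyValueExists(v, key, value)) {
      return true;
    }
  }
  return false;
}
```

Either way this is a full scan per document; if the key's possible locations are known, an indexed $or query will be far faster.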
I'm currently working on a project where I'm running keyword queries against MongoDB. If I search for things that exist in the database, everything works OK, but if I search for things that don't exist, or I have a typo in my query, the application basically crashes.
The query is as simple as this:
var query = Query.And(Query.Matches("text", searchText));
Where searchText is what's being written into the searchbox in the UI.
To check the size of the cursor I've tried implementing this:
if (cursor.Size() == 0)
{
    MessageBox.Show("Your search did not return a match. Please search for something else.");
    return database;
}
But the system takes 10-15 minutes to evaluate that the size is 0, compared to the 0.5 seconds if the size is 1 or more.
So do anyone have any suggestions? Either a better way of checking the size of the cursor or some kind of function that makes the method time out and tell the user that no match was found?
Thanks in advance.
Update:
As requested, I've added the explain output for something that should exist and for something that shouldn't:
db.docs.find( {text: "a"}).explain
function (verbose) {
/* verbose=true --> include allPlans, oldPlan fields */
var n = this.clone();
n._ensureSpecial();
n._query.$explain = true;
n._limit = Math.abs(n._limit) * -1;
var e = n.next();
function cleanup(obj){
if (typeof(obj) != 'object'){
return;
}
delete obj.allPlans;
delete obj.oldPlan;
if (typeof(obj.length) == 'number'){
for (var i=0; i < obj.length; i++){
cleanup(obj[i]);
}
}
if (obj.shards){
for (var key in obj.shards){
cleanup(obj.shards[key]);
}
}
if (obj.clauses){
cleanup(obj.clauses);
}
}
if (!verbose)
cleanup(e);
return e;
}
db.docs.find( {text: "fgrgfk"}).explain
(the shell prints the same function source as above)
Update 2: Overview of indexes:
db.docs.getIndexes()
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "tweet_database.docs",
"name" : "_id_"
}
Here's the structure of part of my collection:
{
    ...
    list: [
        { id: '00A', name: 'None 1' },
        { id: '00B', name: 'None 2' },
    ],
    ...
}
Which method would you advise me to use to retrieve the list of values in the "id" and/or "name" fields with the C library, please?
It seems you are asking for the equivalent of "db.collection.distinct" with the C driver. Is that correct? If so, you can issue distinct as a db command using the mongo_run_command function:
http://api.mongodb.org/c/current/api/mongo_8h.html#a155e3de9c71f02600482f10a5805d70d
Here is a piece of code you may find useful demonstrating the implementation:
mongo conn[1];
int status = mongo_client(conn, "127.0.0.1", 27017);
if (status != MONGO_OK)
    return 1;

bson b[1]; /* command bson */
bson_init(b);
bson_append_string(b, "distinct", "foo");
bson_append_string(b, "key", "list.id"); /* or "list.name" */
bson_finish(b);

bson bres[1]; /* result bson */
status = mongo_run_command(conn, "test", b, bres);
if (status == MONGO_OK) {
    bson_iterator i[1], sub[1];
    bson_type type;
    const char* val;
    bson_find(i, bres, "values");
    bson_iterator_subiterator(i, sub);
    while ((type = bson_iterator_next(sub))) {
        if (type == BSON_STRING) {
            val = bson_iterator_string(sub);
            printf("Value: %s\n", val);
        }
    }
    bson_destroy(bres); /* free the command result */
} else {
    printf("error: %i\n", status);
}
bson_destroy(b);     /* free the command bson */
mongo_destroy(conn); /* close the connection */
In the example above, the database is "test" and the collection, containing documents similar to yours, is "foo". The query portion of the above is equivalent to:
db.runCommand({distinct:'foo', key:'list.id'})
Hope that helps.
Jake