Using MongoDB, I know that I can run the command
db.serverStatus()
which returns a lot of information about the current mongod instance, including memory information:
"mem" : {
"bits" : 64,
"resident" : 4303,
"virtual" : 7390,
...
}
Is there anything similar, or anything in this output that I may be missing, that will also report CPU usage details?
e.g. something like:
"cpu" : {
"usr" : 32,
"wa" : 16,
"id" : 52
}
You could try the top command and check whether its output gives you the information you need. Switch to the admin database and issue:
db.runCommand( { top: 1 } )
{
"totals" : {
"note" : "all times in microseconds",
"Orders.orders" : {
"total" : {
"time" : 107211,
"count" : 56406
},
"readLock" : {
"time" : 107205,
"count" : 56405
},
"writeLock" : {
"time" : 6,
"count" : 1
},
"queries" : {
"time" : 105,
"count" : 1
},
"getmore" : {
"time" : 0,
"count" : 0
},
"insert" : {
"time" : 0,
"count" : 0
},
"update" : {
"time" : 0,
"count" : 0
},
"remove" : {
"time" : 0,
"count" : 0
},
"commands" : {
"time" : 0,
"count" : 0
}
},
... (rest clipped; the output continues with per-collection stats for every database)
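If you only need the numbers for a single namespace, you can pull them out in the shell with db.adminCommand, which runs the command against admin from any database (a minimal sketch; the namespace is taken from the output above):
var t = db.adminCommand({ top: 1 });     // same as running { top: 1 } against the admin database
printjson(t.totals["Orders.orders"]);    // timings for just that one collection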
I am confused by a simple MongoDB query and can't figure out where the problem is. I have a collection like this:
{
"_id" : NumberLong(1939026454),
"username" : "5144269288",
"_type" : 1,
"group_id" : 416,
"user_id" : NumberLong(426661),
"credit_used" : 0.0,
"retry_count" : 1,
"successful" : true,
"type_details" : {
"in_bytes" : 0,
"sub_service_qos" : "",
"sub_service_name" : "rating-group-103",
"out_bytes" : 0,
"sub_service_charging" : "FreeIPTV",
"remote_ip" : ""
},
"logout_time" : ISODate("2017-11-06T07:16:09.000Z"),
"before_credit" : 4560.2962,
"ras_id" : 18,
"caller_id" : "",
"isp_id" : 0,
"duration" : NumberLong(14500),
"details" : {
"connect_info" : "rate-group=103",
"sub_service" : "rating-group-103",
"diameter_request_type" : "initial"
},
"unique_id_value" : "918098048;falcon;Dehghan01000000001415716f9a113697;529;falcon-02b4e8a7__103",
"charge_rule_details" : [
{
"start_in_bytes" : 0,
"charge_rule_id" : 3682,
"start_out_bytes" : 0,
"stop_time" : ISODate("2017-11-06T07:16:09.000Z"),
"stop_out_bytes" : 0,
"start_time" : ISODate("2017-11-06T03:14:29.000Z"),
"charge_rule_desc" : "rating-group-103",
"stop_in_bytes" : 0
}
],
"unique_id" : "acct_session_id",
"login_time" : ISODate("2017-11-06T03:14:29.000Z")
}
I need to filter documents whose login_time is between two given dates and whose type_details.sub_service_name equals a given string.
I tried this:
db.getCollection('connection_log_partial_data').find({
"type_details" : {"sub_service_name": "rating-group-103"},
"login_time": {
"$gt": ISODate("2016-11-06T03:14:29.000Z"),
"$lt": ISODate("2017-11-06T03:14:29.000Z")
}
});
but it fetches 0 records. Any suggestions?
Your query should use dot notation to reach into the embedded document; something like:
db.getCollection('connection_log_partial_data').find({
"type_details.sub_service_name" : "rating-group-103",
"login_time": {
"$gte": ISODate("2016-11-06T03:14:29.000Z"),
"$lte": ISODate("2017-11-06T03:14:29.000Z")
}
});
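The reason the original query fetched 0 records is that {"type_details": {...}} performs an exact-subdocument match: type_details would have to equal that entire object, with no other fields. A quick check (a sketch against the collection from the question):
db.getCollection('connection_log_partial_data').findOne({
    "type_details" : { "sub_service_name" : "rating-group-103" }
})
// returns null: the real type_details also contains in_bytes, out_bytes, etc.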
I'm creating a sharded cluster following the official tutorial, using three configuration servers, three servers in the replica set, and a mongos client, but when I try to create a collection with
db.createCollection("XYZ")
I get
/* 1 */
{
"ok" : 0.0,
"errmsg" : "can't create user databases on a --configsvr instance",
"code" : 14037,
"codeName" : "Location14037"
}
My server status is
/* 1 */
{
"host" : "mongo-1",
"version" : "3.4.1",
"process" : "mongos",
"pid" : NumberLong(1),
"uptime" : 16325.0,
"uptimeMillis" : NumberLong(16324905),
"uptimeEstimate" : NumberLong(16324),
"localTime" : ISODate("2017-01-26T02:04:32.110Z"),
"asserts" : {
"regular" : 0,
"warning" : 0,
"msg" : 0,
"user" : 0,
"rollovers" : 0
},
"connections" : {
"current" : 4,
"available" : 419426,
"totalCreated" : 23
},
"extra_info" : {
"note" : "fields vary by platform",
"page_faults" : 0
},
"network" : {
"bytesIn" : NumberLong(70779),
"bytesOut" : NumberLong(106181),
"physicalBytesIn" : NumberLong(70779),
"physicalBytesOut" : NumberLong(106181),
"numRequests" : NumberLong(1865)
},
"opcounters" : {
"insert" : 0,
"query" : 54,
"update" : 0,
"delete" : 0,
"getmore" : 0,
"command" : 864
},
"sharding" : {
"configsvrConnectionString" : "production/10.7.0.28:27019,10.7.0.29:27019,10.7.0.30:27019",
"lastSeenConfigServerOpTime" : {
"ts" : Timestamp(6379728405545353, 1),
"t" : NumberLong(2)
}
},
"tcmalloc" : {
"generic" : {
"current_allocated_bytes" : 2719976,
"heap_size" : 6291456
},
"tcmalloc" : {
"pageheap_free_bytes" : 167936,
"pageheap_unmapped_bytes" : 0,
"max_total_thread_cache_bytes" : 1045430272,
"current_total_thread_cache_bytes" : 777824,
"total_free_bytes" : 3403544,
"central_cache_free_bytes" : 194040,
"transfer_cache_free_bytes" : 2431680,
"thread_cache_free_bytes" : 777824,
"aggressive_memory_decommit" : 0,
"formattedString" : "------------------------------------------------\nMALLOC: 2719976 ( 2.6 MiB) Bytes in use by application\nMALLOC: + 167936 ( 0.2 MiB) Bytes in page heap freelist\nMALLOC: + 194040 ( 0.2 MiB) Bytes in central cache freelist\nMALLOC: + 2431680 ( 2.3 MiB) Bytes in transfer cache freelist\nMALLOC: + 777824 ( 0.7 MiB) Bytes in thread cache freelists\nMALLOC: + 1171648 ( 1.1 MiB) Bytes in malloc metadata\nMALLOC: ------------\nMALLOC: = 7463104 ( 7.1 MiB) Actual memory used (physical + swap)\nMALLOC: + 0 ( 0.0 MiB) Bytes released to OS (aka unmapped)\nMALLOC: ------------\nMALLOC: = 7463104 ( 7.1 MiB) Virtual address space used\nMALLOC:\nMALLOC: 508 Spans in use\nMALLOC: 24 Thread heaps in use\nMALLOC: 4096 Tcmalloc page size\n------------------------------------------------\nCall ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).\nBytes released to the OS take up virtual address space but no physical memory.\n"
}
},
"mem" : {
"bits" : 64,
"resident" : 28,
"virtual" : 228,
"supported" : true
},
"metrics" : {
"cursor" : {
"timedOut" : NumberLong(0),
"open" : {
"multiTarget" : NumberLong(0),
"singleTarget" : NumberLong(0),
"pinned" : NumberLong(0),
"total" : NumberLong(0)
}
},
"commands" : {
"addShard" : {
"failed" : NumberLong(0),
"total" : NumberLong(3)
},
"aggregate" : {
"failed" : NumberLong(0),
"total" : NumberLong(12)
},
"buildInfo" : {
"failed" : NumberLong(0),
"total" : NumberLong(14)
},
"create" : {
"failed" : NumberLong(9),
"total" : NumberLong(9)
},
"enableSharding" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
},
"find" : {
"failed" : NumberLong(0),
"total" : NumberLong(54)
},
"grantRolesToUser" : {
"failed" : NumberLong(7),
"total" : NumberLong(10)
},
"isMaster" : {
"failed" : NumberLong(0),
"total" : NumberLong(48)
},
"listCollections" : {
"failed" : NumberLong(0),
"total" : NumberLong(19)
},
"ping" : {
"failed" : NumberLong(0),
"total" : NumberLong(618)
},
"replSetGetStatus" : {
"failed" : NumberLong(14),
"total" : NumberLong(14)
},
"revokeRolesFromUser" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
},
"saslContinue" : {
"failed" : NumberLong(0),
"total" : NumberLong(62)
},
"saslStart" : {
"failed" : NumberLong(0),
"total" : NumberLong(31)
},
"serverStatus" : {
"failed" : NumberLong(0),
"total" : NumberLong(2)
},
"usersInfo" : {
"failed" : NumberLong(0),
"total" : NumberLong(8)
},
"whatsmyuri" : {
"failed" : NumberLong(0),
"total" : NumberLong(12)
}
}
},
"ok" : 1.0
}
And the sharding status:
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("58891625f1d4d70889a9787b")
}
shards:
{ "_id" : "production", "host" : "production/10.7.0.14:27018,10.7.0.16:27018,10.7.0.9:27018", "state" : 1 }
active mongoses:
"3.4.1" : 1
balancer:
Currently enabled: yes
Currently running: yes
Balancer lock taken at Wed Jan 25 2017 17:20:29 GMT-0400 (VET) by ConfigServer:Balancer
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "base", "primary" : "production", "partitioned" : true }
What am I doing wrong?
Thanks in advance.
Answering my own question: the issue was really silly. Because the config servers run as a replica set, the replica set name of the config servers and the name of each shard replica set must be unique. In my case they were not (both were named "production", as you can see in configsvrConnectionString and the shards list above), and that caused the error.
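For illustration, a minimal sketch of initiating the two replica sets with distinct names (the names cfgRS and shardRS are hypothetical; the hosts are the ones from the status output above):
// On one config server (mongod started with --configsvr --replSet cfgRS):
rs.initiate({
    _id: "cfgRS",
    configsvr: true,
    members: [
        { _id: 0, host: "10.7.0.28:27019" },
        { _id: 1, host: "10.7.0.29:27019" },
        { _id: 2, host: "10.7.0.30:27019" }
    ]
})
// On one shard member (mongod started with --shardsvr --replSet shardRS):
rs.initiate({
    _id: "shardRS",
    members: [
        { _id: 0, host: "10.7.0.14:27018" },
        { _id: 1, host: "10.7.0.16:27018" },
        { _id: 2, host: "10.7.0.9:27018" }
    ]
})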
I'm new to MongoDB. I've inserted a float number into a collection. However, when I export that collection via mongoexport, the float number changes.
This is what in the database:
{ "_id" : ObjectId("56653e23a6b56616ba417bcd"), "id" : "601318", "name" : "中国平安", "buy" : [ { "time" : ISODate("2015-06-15T01:30:00Z"), "price" : 86.9, "quantity" : 1000, "value" : 87074.4 } ], "sell" : [ { "time" : ISODate("2015-07-07T01:30:00Z"), "price" : 80.88, "quantity" : 1000, "value" : 80636.76 } ] }
This is when it's exported to json:
{ "_id" : { "$oid" : "56653e23a6b56616ba417bcd" }, "id" : "601318", "name" : "中国平安", "buy" : [ { "time" : { "$date" : "2015-06-15T09:30:00.000+0800" }, "price" : 86.90000000000001, "quantity" : 1000, "value" : 87074.39999999999 } ], "sell" : [ { "time" : { "$date" : "2015-07-07T09:30:00.000+0800" }, "price" : 80.88, "quantity" : 1000, "value" : 80636.75999999999 } ] }
How can I avoid this precision loss?
The value hasn't actually changed: numbers like 80636.76 cannot be represented exactly as a binary double, and mongoexport simply prints more digits of the same stored value than the shell does. To sidestep the issue entirely, store the value as an integer: 8063676 (cents, or whatever your smallest unit is).
See this question.
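A minimal sketch of that approach, using a hypothetical trades collection:
// Store money as integer cents and convert only for display
db.trades.insert({
    "price_cents" : 8088,        // 80.88
    "value_cents" : 8063676      // 80636.76
})
var doc = db.trades.findOne();
print((doc.value_cents / 100).toFixed(2));   // "80636.76"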
Suppose we have the following document:
{
embedded:[
{
email:"abc#abc.com",
active:true
},
{
email:"def#abc.com",
active:false
}]
}
What index should be used to support an $elemMatch query on the email and active fields of the embedded documents?
Update on the question:
db.foo.aggregate([{"$match":{"embedded":{"$elemMatch":{"email":"abc#abc.com","active":true}}}},{"$group":{_id:null,"total":{"$sum":1}}}],{explain:true});
Querying this, I get the following explain output for the aggregate:
{
"stages" : [
{
"$cursor" : {
"query" : {
"embedded" : {
"$elemMatch" : {
"email" : "abc#abc.com",
"active" : true
}
}
},
"fields" : {
"_id" : 0,
"$noFieldsNeeded" : 1
},
"planError" : "InternalError No plan available to provide stats"
}
},
{
"$group" : {
"_id" : {
"$const" : null
},
"total" : {
"$sum" : {
"$const" : 1
}
}
}
}
],
"ok" : 1
}
I think MongoDB is internally not using the index for this query.
Thanks in advance :)
Update with the output of db.foo.stats() and db.foo.getIndexes():
db.foo.stats()
{
"ns" : "test.foo",
"count" : 2,
"size" : 480,
"avgObjSize" : 240,
"storageSize" : 8192,
"numExtents" : 1,
"nindexes" : 3,
"lastExtentSize" : 8192,
"paddingFactor" : 1,
"systemFlags" : 0,
"userFlags" : 1,
"totalIndexSize" : 24528,
"indexSizes" : {
"_id_" : 8176,
"embedded.email_1_embedded.active_1" : 8176,
"name_1" : 8176
},
"ok" : 1
}
db.foo.getIndexes();
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "test.foo"
},
{
"v" : 1,
"key" : {
"embedded.email" : 1,
"embedded.active" : 1
},
"name" : "embedded.email_1_embedded.active_1",
"ns" : "test.foo"
},
{
"v" : 1,
"key" : {
"name" : 1
},
"name" : "name_1",
"ns" : "test.foo"
}
]
Should you decide to stick with that data model and your queries, here's how to create indexes that match the query:
You can simply index "embedded.email", or use a compound key on the embedded fields, i.e. something like
> db.foo.ensureIndex({"embedded.email" : 1 });
- or -
> db.foo.ensureIndex({"embedded.email" : 1, "embedded.active" : 1});
Indexing boolean fields is often not too useful, since their selectivity is low.
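To verify that the planner actually uses the compound multikey index for the $elemMatch predicate, you can check explain() on the equivalent find() (a quick sketch; the index name is the one from getIndexes() above):
db.foo.find({
    "embedded" : { "$elemMatch" : { "email" : "abc#abc.com", "active" : true } }
}).explain()
// the winning plan should reference embedded.email_1_embedded.active_1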
In MongoDB v2.2, when I try to import a single JSON document like the one below from my .json file into an empty collection, I get 13 objects imported. Here is what I'm doing.
This is the data (I've shortened the field names to protect the data):
[
{
"date" : ISODate("2012-08-01T00:00:00Z"),
"start" : ISODate("2012-08-01T00:00:00Z"),
"xxx" : 1,
"yyt" : 5,
"p" : 6,
"aam" : 20,
"dame" : "denon",
"33" : 10,
"xxt" : 8,
"col" : 3,
"rr" : [
{ "name" : "Plugin 1", "count" : 1 },
{ "name" : "Plugin 2", "count" : 1 },
{ "name" : "Plugin 3", "count" : 1 }
],
"xkx" : { "y" : 0, "n" : 1 },
"r" : { "y" : 0, "n" : 1 },
"po" : { "y" : 0, "n" : 1 },
"pge" : { "posts" : 0, "pages" : 1 },
"pol" : { "y" : 0, "n" : 1 },
"lic" : { "y" : 0, "n" : 1 },
"count" : 30,
"tx" : [
{ "zone" : -7, "count" : 1 }
],
"yp" : "daily",
"ons" : [
{ "version" : "9.6.8", "count" : 1 }
],
"ions" : [
{ "version" : "10.0.3", "count" : 1 }
]
}
]
with this command:
mongoimport --db development_report --collection xxx --username xxx --password xxx --file /Users/Alex/Desktop/daily2.json --type json --jsonArray --stopOnError --journal
I get this weird response:
Mon Sep 3 12:09:12 imported 13 objects
and these 13 new documents end up in the collection instead of one:
{ "_id" : ObjectId("5044114815e24c08bcdc988e") }
{ "_id" : ObjectId("5044114815e24c08bcdc988f"), "name" : "Plugin 1", "count" : 1 }
{ "_id" : ObjectId("5044114815e24c08bcdc9890"), "name" : "Plugin 2", "count" : 1 }
{ "_id" : ObjectId("5044114815e24c08bcdc9891"), "name" : "Plugin 3", "count" : 1 }
{ "_id" : ObjectId("5044114815e24c08bcdc9892"), "y" : 0, "n" : 1 }
{ "_id" : ObjectId("5044114815e24c08bcdc9893"), "y" : 0, "n" : 1 }
{ "_id" : ObjectId("5044114815e24c08bcdc9894"), "y" : 0, "n" : 1 }
{ "_id" : ObjectId("5044114815e24c08bcdc9895"), "posts" : 0, "pages" : 1 }
{ "_id" : ObjectId("5044114815e24c08bcdc9896"), "y" : 0, "n" : 1 }
{ "_id" : ObjectId("5044114815e24c08bcdc9897"), "y" : 0, "n" : 1 }
{ "_id" : ObjectId("5044114815e24c08bcdc9898"), "zone" : -7, "count" : 1 }
{ "_id" : ObjectId("5044114815e24c08bcdc9899"), "version" : "9.6.8", "count" : 1 }
{ "_id" : ObjectId("5044114815e24c08bcdc989a"), "version" : "10.0.3", "count" : 1 }
What am I doing wrong?
The problem you are having is with the two ISODate fields at the start of your document.
JSON has no "date" type, so mongoimport does not handle the ISODate(...) shell syntax in your document. You would need to convert these to extended JSON, like so:
[
{
"date" : { "$date" : 1343779200000 },
"start" : { "$date" : 1343779200000 },
...
And your import will work.
The reason this comes about is that MongoDB handles more types than are available in the JSON spec; they are written out as extended JSON. You can find more information in the documentation, and there is also an open ticket to make mongoimport handle all the formats MongoDB does.
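If you need the millisecond values for the $date fields, the shell itself can produce them (a quick sketch; the result matches the value used above):
new Date("2012-08-01T00:00:00Z").getTime()   // 1343779200000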
This is really frustrating; I couldn't get anywhere fast with the import tool, so I used the load() function within the mongo shell to load a script that inserted my records.
> load('/Users/Alex/Desktop/daily.json');
I obviously had to modify the JSON file so that it passes the documents to insert() as an array, like so:
db.mycollection.insert([
    { DOCUMENT 1 },
    ...
    { DOCUMENT N }
]);
This is really late, but in case it can help anyone else: you should not be passing a JSON array. Drop the --jsonArray flag, list one JSON document per line, and each line will create a separate document. The file below would insert 2 documents:
{ "date" : { "$date": 1354320000000 }, "xxx" : 1, "yyt" : 5, ... }
{ "date" : { "$date": 1354320000000 }, "xxx" : 2, "yyt" : 6, ... }