"netIn" in mongostat output - mongodb

I wrote a test script that performs millions of updates (using update queries) on a collection. The following is the mongostat output:
insert query update delete getmore command flushes mapped vsize res faults locked % idx miss % qr|qw ar|aw netIn netOut conn time
0 0 21156 0 0 1 0 208m 2.45g 119m 0 81.7 0 0|8 0|9 2m 1k 10 12:52:11
0 0 20620 0 0 1 0 208m 2.45g 119m 0 82.5 0 0|8 0|9 1m 1k 10 12:52:12
0 0 21915 0 0 1 0 208m 2.45g 119m 0 81.9 0 0|8 0|9 2m 1k 10 12:52:13
0 0 21634 0 0 1 0 208m 2.45g 119m 0 82.1 0 0|8 0|9 2m 1k 10 12:52:15
0 0 19793 0 0 1 0 208m 2.45g 119m 0 81.8 0 0|8 0|9 1m 1k 10 12:52:16
0 0 22062 0 0 1 0 208m 2.45g 119m 0 81.9 0 0|8 0|8 2m 1k 10 12:52:17
0 0 23395 0 0 1 0 208m 2.45g 119m 0 81.9 0 0|8 0|8 2m 1k 10 12:52:19
netIn reports the total network traffic in, in bytes per second, I believe. Is there any way to increase netIn to a few MB, so that I can increase the number of update statements per second?

I don't think you understand the netIn statistic: it isn't a limit, but the actual amount of data received by MongoDB per sample interval. In other words, the netIn value will increase if you (can) do more updates.
Increasing update throughput itself may be possible, but it is very application-specific.
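For example, if the bottleneck is per-operation network round trips, batching updates can help. Here is a minimal sketch using PyMongo's bulk-write API; the local mongod URI, the "events" collection, and the "counter" field are all illustrative assumptions, not part of the question:
# A sketch, assuming pymongo and a mongod on localhost; collection and
# field names are hypothetical.
from pymongo import MongoClient, UpdateOne
client = MongoClient("mongodb://localhost:27017")
coll = client.test.events
# One bulk_write round trip carries many updates, instead of one
# network round trip per update statement.
ops = [UpdateOne({"_id": i}, {"$inc": {"counter": 1}}) for i in range(10000)]
result = coll.bulk_write(ops, ordered=False)  # unordered: server may reorder freely
print(result.modified_count)
As throughput rises, so will netIn, since it simply measures what arrives over the wire.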

Is there a difference between WKB and the hex value returned in PostGIS

I've been experimenting with PostGIS, and here's what I've noticed:
Suppose I have a table defined as follows:
CREATE TABLE IF NOT EXISTS geomtest (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    geom geometry(POLYGON, 4326) NOT NULL
);
And I add the following polygon:
SRID=4326;POLYGON((0 0,0 10,10 10,10 0,0 0))
If I query the geom column on its own, I get a hex representation of the geometry. If I instead call ST_AsBinary(geom), I get a binary representation.
However, when I convert both the hex and binary representations to an array of bytes, the results I get are slightly different. The first comment below is the result of converting the hex into binary; the second comes straight from ST_AsBinary():
//[1 3 0 0 32 230 16 0 0 1 0 0 0 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 36 64 0 0 0 0 0 0 36 64 0 0 0 0 0 0 36 64 0 0 0 0 0 0 36 64 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
//[1 3 0 0 0 1 0 0 0 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 36 64 0 0 0 0 0 0 36 64 0 0 0 0 0 0 36 64 0 0 0 0 0 0 36 64 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
As you can see, the first 4 bytes are identical; they encode the endianness flag and the geometry type (3, in this case a Polygon). The rest of the bytes are the same too. The only difference is that a few extra bytes are inserted after the first 4.
I wondered whether this had to do with the projection (SRID=4326), but I haven't found any evidence for that.
If someone could shed some light on this, I'd greatly appreciate it.
I didn't examine the bytes, but I am sure the difference is the SRID, which is not included in the WKB format.
Try ST_AsEWKB instead.
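You can verify this by decoding the headers of the two byte arrays in the question. A minimal sketch in Python (standard library only; the byte values are copied from the question):
import struct
# Headers of the two outputs shown above.
ewkb = bytes([1, 3, 0, 0, 32, 230, 16, 0, 0])  # hex column output (EWKB)
wkb = bytes([1, 3, 0, 0, 0])                   # ST_AsBinary output (WKB)
def decode_header(buf):
    # Byte 0 is the endianness flag (1 = little-endian).
    endian = "<" if buf[0] == 1 else ">"
    # Bytes 1-4 are the geometry type; EWKB sets flag bits in the high byte.
    (gtype,) = struct.unpack_from(endian + "I", buf, 1)
    has_srid = bool(gtype & 0x20000000)  # EWKB "SRID present" flag
    base_type = gtype & 0xFF             # low byte: 3 = Polygon
    srid = None
    if has_srid:
        # The "extra" bytes: a uint32 SRID immediately after the type word.
        (srid,) = struct.unpack_from(endian + "I", buf, 5)
    return base_type, srid
print(decode_header(ewkb))  # (3, 4326): Polygon with embedded SRID
print(decode_header(wkb))   # (3, None): plain WKB, no SRID
So the difference is the flag bit 0x20000000 set in the type word (the 32 you see in byte 4) plus the little-endian uint32 SRID 4326 (bytes 230, 16, 0, 0).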

sbcl exception: Heap exhausted during garbage collection

When our application runs for some time, for example for several hours, SBCL throws a heap exhausted exception:
Heap exhausted during garbage collection: 1968 bytes available, 2128 requested.
Gen StaPg UbSta LaSta LUbSt Boxed Unboxed LB LUB !move Alloc Waste Trig WP GCs Mem-age
0: 0 0 0 0 0 0 0 0 0 0 0 5368709 0 0 0.0000
1: 0 0 0 0 0 0 0 0 0 0 0 5368709 0 0 0.0000
2: 0 0 0 0 0 0 0 0 0 0 0 5368709 0 0 0.0000
3: 101912 101913 0 0 19362 20536 0 0 0 162867456 554752 102714709 0 1 1.4405
4: 130984 131071 0 0 29240 18868 0 0 25 191196152 5854216 128537781 14785 1 0.6442
5: 75511 81013 0 0 16567 17127 92 99 36 132974568 5818392 2000000 16565 0 0.0000
6: 0 0 0 0 7949 1232 0 0 0 37605376 0 2000000 7766 0 0.0000
Total bytes allocated = 524643552
Dynamic-space-size bytes = 536870912
GC control variables:
*GC-INHIBIT* = true
*GC-PENDING* = true
*STOP-FOR-GC-PENDING* = false
fatal error encountered in SBCL pid 3281(tid 3067845440):
Heap exhausted, game over.
Welcome to LDB, a low-level debugger for the Lisp runtime environment.
ldb>
Any suggestions?
SBCL does not allow you to allocate more than (sb-ext:dynamic-space-size) bytes on the heap. Here you have the 512MB default (536870912 bytes), and the Lisp program was already using nearly that amount when it attempted another allocation.
You could double the available heap space to 1024MB by starting SBCL with --dynamic-space-size 1024. However, as several comments point out, there may be a memory leak, where the number of live objects grows in proportion to how long the system has been running, so this will offer only a temporary respite.
Calling the standard Common Lisp function (room t) periodically might help you debug this.
More advanced code, such as http://dwim.hu/darcsweb/darcsweb.cgi?r=HEAD%20hu.dwim.debug;a=headblob;f=/source/path-to-root.lisp#l42, which delves into the SB-VM internal map of allocations, could shed more light. SBCL also has a statistical profiler (http://www.sbcl.org/manual/#Statistical-Profiler) that supports reporting on allocations.
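To spot a leak, you could log heap usage periodically and look for steady growth across samples. A minimal sketch (the function name and interval are illustrative):
;; Logs dynamic-space usage every INTERVAL seconds in a background thread.
(defun start-heap-monitor (&optional (interval 60))
  (sb-thread:make-thread
   (lambda ()
     (loop
       (format t "~&dynamic space: ~D of ~D bytes used~%"
               (sb-kernel:dynamic-usage)
               (sb-ext:dynamic-space-size))
       (room t)          ; detailed per-type breakdown, as suggested above
       (sleep interval)))
   :name "heap-monitor"))
If usage climbs monotonically while the application's workload is steady, something is retaining references.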

MongoDB: Increasing read speed on a single machine

I use MongoDB to store price events for stocks. Depending on what you want to screen, the volume of events can rapidly grow to 1-2 GB.
I run MongoDB on a single machine, and it is taking longer and longer to load the data. I have not been able to find a clear answer on the web as to whether "sharding on a single server" benefits read speed.
Is that the right path to increase read speed?
insert query update delete getmore command flushes mapped vsize res faults locked db
0 1 395 0 1 395 0 63.9g 128g 2.53g 7484 prices:70.5%
0 0 14726 0 5 14728 0 63.9g 128g 2.48g 31555 prices:7.4%
0 0 0 0 0 1 0 63.9g 128g 2.48g 436 prices:0.0%
0 0 0 0 1 1 0 63.9g 128g 2.48g 0 prices:0.0%
0 0 0 0 0 1 1 63.9g 128g 2.49g 3877 .:83.9%
0 0 0 0 0 1 0 63.9g 128g 2.49g 0 prices:0.0%

MongoDB statistics

I'm running a MongoDB instance in a replica set. When there are a lot of inserts, I see very weird statistics for faults and locked %.
How come locked % can be more than 100?!
Where do these faults happen? I have no logs mentioning any fault. Does anyone have a clue about what this means?
insert query update delete getmore command flushes mapped vsize res faults locked % idx miss % qr|qw ar|aw netIn netOut conn set repl time
9 0 0 0 1 4 0 70.3g 141g 4.77g 20 124 0 0|0 0|1 1m 2m 10 socialdb M 18:49:49
18 0 0 0 3 1 0 70.3g 141g 4.77g 17 73.8 0 0|0 0|1 1m 2m 10 socialdb M 18:49:50
21 0 0 0 1 5 0 70.3g 141g 4.77g 18 104 0 0|0 0|1 1m 1m 10 socialdb M 18:49:51
20 0 0 0 3 1 0 70.3g 141g 4.78g 18 98.8 0 0|0 0|1 1m 3m 10 socialdb M 18:49:52
172 0 0 0 5 4 0 70.3g 141g 4.79g 133 72.8 0 0|0 0|0 7m 12m 10 socialdb M 18:49:53
76 0 0 0 3 1 0 70.3g 141g 4.8g 114 65.1 0 0|0 0|1 6m 10m 10 socialdb M 18:49:54
54 0 0 0 4 4 1 70.3g 141g 4.81g 45 90.6 0 0|0 0|1 2m 8m 10 socialdb M 18:49:55
85 0 0 0 4 2 0 70.3g 141g 4.84g 101 98.1 0 0|0 0|1 6m 11m 10 socialdb M 18:49:56
77 0 0 0 3 4 0 70.3g 141g 4.82g 78 74.5 0 0|0 0|1 4m 9m 10 socialdb M 18:49:57
72 0 0 0 3 1 0 70.3g 141g 4.84g 111 95.7 0 0|0 0|1 6m 10m 10 socialdb M 18:49:58
Also, is there a better (standard) monitoring tool that is free?
Not sure about the other two, but this could be the answer to your first question, if you are using v2.2:
http://docs.mongodb.org/manual/reference/mongostat/
The above page mentions:
locked:
The percent of time in a global write lock.
(Changed in version 2.2: the locked db field replaces the locked % field to provide more appropriate data regarding the database-specific locks in version 2.2.)
locked db:
New in version 2.2.
The percent of time in the per-database context-specific lock. mongostat will report the database that has spent the most time with a write lock since the last mongostat call.
This value represents the amount of time the database held a database-specific lock plus the time that mongod spent in the global lock. Because of this, and because of the sampling method, you may see values greater than 100%.
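As a hypothetical illustration of how the figure can exceed 100: if, within one 1-second sample, mongod held the socialdb lock for 0.7s and additionally spent 0.5s in the global write lock, the reported value would be (0.7 + 0.5) / 1.0 = 120%.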

Memcached using more than max memory

I have an installation of memcached that I want to use in my production environment, but when I ran a couple of tests, it seems that memcached doesn't free up memory even after it has used up all of its allocated memory. I also logged in and ran a flush_all command, but the objects are still in the cache.
Here are the outputs from some tests.
memcache-top
memcache-top v0.6 (default port: 11211, color: on, refresh: 3 seconds)
INSTANCE USAGE HIT % CONN TIME EVICT/s READ/s WRITE/s
127.0.0.1:11211 427.1% 0.0% 18 1.4ms 0.0 244 261.0K
AVERAGE: 427.1% 0.0% 18 1.4ms 0.0 244 261.0K
TOTAL: 4.3MB/ 1.0MB 18 1.4ms 0.0 244 261.0K
memcached-tool 127.0.0.1:11211 display
No Item_Size Max_age Pages Count Full? Evicted Evict_Time OOM
1 560B 4s 1 1872 yes 0 0 15488
2 704B 32s 1 559 no 0 0 0
3 880B 4s 1 1191 yes 0 0 1335
4 1.1K 9s 1 116 no 0 0 0
5 1.4K 21s 1 14 no 0 0 0
6 1.7K 4s 1 17 no 0 0 0
7 2.1K 84s 1 24 no 0 0 0
8 2.7K 130s 1 60 no 0 0 0
9 3.3K 25s 1 290 no 0 0 0
10 4.2K 9s 1 194 no 0 0 0
11 5.2K 9s 1 116 no 0 0 0
15 12.7K 816s 1 1 no 0 0 0
16 15.9K 769s 1 5 no 0 0 0
18 24.8K 786s 1 1 no 0 0 0
21 48.5K 816s 1 1 no 0 0 0
memcached-tool 127.0.0.1:11211 stats
127.0.0.1:11211 Field Value
accepting_conns 1
auth_cmds 0
auth_errors 0
bytes 4478060
bytes_read 23964596
bytes_written 546642860
cas_badval 0
cas_hits 0
cas_misses 0
cmd_flush 0
cmd_get 240894
cmd_set 4504
conn_yields 0
connection_structures 21
curr_connections 18
curr_items 4461
decr_hits 0
decr_misses 0
delete_hits 0
delete_misses 0
evictions 0
get_hits 43756
get_misses 197138
incr_hits 0
incr_misses 0
limit_maxbytes 1048576
listen_disabled_num 0
pid 8731
pointer_size 64
reclaimed 0
rusage_system 5.047232
rusage_user 4.311344
threads 4
time 1306247929
total_connections 3092
total_items 4504
uptime 1240
version 1.4.5
-m tells memcached how much RAM to use for item storage (in megabytes). Note carefully that this isn't a global memory limit, so memcached will use a few % more memory than you tell it to. Set this to safe values. Setting it to less than 48 megabytes does not work properly in 1.4.x and earlier. It will still use the memory.
Source: https://github.com/memcached/memcached/wiki/ConfiguringServer#commandline-arguments
Your stats show limit_maxbytes = 1048576, i.e. the server was started with -m 1. That is far below the 48MB minimum mentioned above, which is why memcache-top reports usage at 427.1% of the limit.
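If you want to check the configured limit programmatically rather than eyeballing memcache-top, the server's stats expose it. A minimal sketch, assuming the pymemcache client and a memcached on 127.0.0.1:11211:
# A sketch: read back the -m limit and current item-storage usage.
from pymemcache.client.base import Client
client = Client(("127.0.0.1", 11211))
stats = client.stats()
print(stats[b"limit_maxbytes"])  # 1048576 here, i.e. started with -m 1
print(stats[b"bytes"])           # bytes currently used for items (4478060 above)
Comparing the two values makes the overshoot obvious without parsing tool output.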