I'd like to visualise the RAM usage of a Memcached daemon - what is the best utility to use?
Ideally I'd like to use Perl.
Memcached reports a number of statistics such as memory used, objects stored, hits, and misses. Connect to the server (probably localhost:11211) with a standard TCP socket and send "stats\r\n" to get back the list of statistics. See below for an example.
Look at Cacti for actually graphing the data. I've had great success with it.
$ telnet localhost 11211
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
stats
STAT pid 75723
STAT uptime 4166691
STAT time 1236609062
STAT version 1.2.4
STAT pointer_size 32
STAT rusage_user 115.028511
STAT rusage_system 326.163351
STAT curr_items 83335
STAT total_items 1822140
STAT bytes 239997834
STAT curr_connections 48
STAT total_connections 7840
STAT connection_structures 83
STAT cmd_get 4273541
STAT cmd_set 1822140
STAT get_hits 2442609
STAT get_misses 1830932
STAT evictions 1696494
STAT bytes_read 5162992092
STAT bytes_written 7000049654
STAT limit_maxbytes 268435456
STAT threads 1
END
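Since you mention Perl: here's a minimal sketch of pulling the same stats over a plain TCP socket and printing the memory-related fields (untested, and assuming memcached is on localhost:11211):
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# Connect to the memcached daemon (adjust host/port as needed).
my $sock = IO::Socket::INET->new(
    PeerAddr => 'localhost',
    PeerPort => 11211,
    Proto    => 'tcp',
) or die "Cannot connect to memcached: $!";

print $sock "stats\r\n";

my %stats;
while (my $line = <$sock>) {
    last if $line =~ /^END/;
    # Lines look like: STAT bytes 239997834
    $stats{$1} = $2 if $line =~ /^STAT (\S+) (\S+)/;
}
close $sock;

printf "bytes used:     %d\n", $stats{bytes};
printf "limit_maxbytes: %d\n", $stats{limit_maxbytes};
printf "RAM usage:      %.1f%%\n", 100 * $stats{bytes} / $stats{limit_maxbytes};
Run something like this from cron and feed the numbers into Cacti or RRDtool for graphing.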
Take a look at Amnesia (http://github.com/benschwarz/amnesia/tree/master); it probably does something close to what you want.
If you want to build your own little tool, you should check out RRDtool.
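For example, here is a rough sketch using the RRDs Perl bindings that ship with RRDtool; the file name, step, and data-source layout are just assumptions, and the bytes value would come from the "stats" output shown above:
use strict;
use warnings;
use RRDs;

my $rrd = 'memcached.rrd';    # placeholder path

# Create the RRD once: one gauge sampled every 60 s, keeping a day of 1-minute averages.
unless (-e $rrd) {
    RRDs::create($rrd,
        '--step', '60',
        'DS:bytes:GAUGE:120:0:U',
        'RRA:AVERAGE:0.5:1:1440',
    );
    my $err = RRDs::error;
    die "RRDs::create failed: $err" if $err;
}

# $bytes would come from the STAT bytes line of the stats output.
my $bytes = 239997834;
RRDs::update($rrd, "N:$bytes");
my $err = RRDs::error;
die "RRDs::update failed: $err" if $err;
A cron job running this every minute plus rrdtool graph would give you a basic memory-usage chart; Cacti essentially automates the same idea.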
I have just tried to install MongoDB on a fresh Ubuntu 18 machine.
For this I went through the tutorial from the website.
Everything went fine - including starting the server with
sudo systemctl start mongod
and checking that it runs with:
sudo systemctl status mongod
However, I can't seem to start a mongo console. When I type mongo, I get the following error:
2020-07-17T13:26:48.049+0000 F - [main] Failed to mlock: Cannot allocate locked memory. For more details see: https://dochub.mongodb.org/core/cannot-allocate-locked-memory: Operation not permitted
2020-07-17T13:26:48.049+0000 F - [main] Fatal Assertion 28832 at src/mongo/base/secure_allocator.cpp 255
2020-07-17T13:26:48.049+0000 F - [main]
***aborting after fassert() failure
I checked the suggested link, but there doesn't seem to be a limit problem, as resources are not limited (checked with ulimit). The machine has 16 GB of RAM. Any idea what the problem/solution might be?
EDIT: the process limits are:
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 64000 64000 processes
Max open files 64000 64000 files
Max locked memory unlimited unlimited bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 62761 62761 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
I was getting that exact error, and the linked MongoDB page wasn't helpful for me either. I'm running on FreeBSD and found a useful bit of detail in a bug report for the port. It turns out a system-level resource limit was the underlying problem. On FreeBSD, the key is these two sysctl settings:
sysctl vm.stats.vm.v_wire_count vm.max_wired
v_wire_count should be less than max_wired. Increasing max_wired solved the issue for me.
If you use some sort of virtualization to deploy your machine, you need to make sure that the memlock system calls are allowed. For example, for systemd-nspawn, check this answer: https://stackoverflow.com/a/69286781/16085315
I just had this issue on my FreeBSD VM with MongoDB. As mentioned previously, the following solved it:
# sysctl vm.stats.vm.v_wire_count vm.max_wired
vm.stats.vm.v_wire_count: 1072281
vm.max_wired: 411615
# sysctl -w vm.max_wired=1400000
vm.max_wired: 411615 -> 1400000
# service mongod restart
Stopping mongod.
Waiting for PIDS: 36308.
Starting mongod.
To make the setting permanent, add the value to /etc/sysctl.conf:
vm.max_wired=1400000
v_wire_count should be less than max_wired
I have a server running Postgres 9.1.15. The server has 2GB of RAM and no swap. Intermittently Postgres will start getting "out of memory" errors on some SELECTs, and will continue doing so until I restart Postgres or some of the clients that are connected to it. What's weird is that when this happens, free still reports over 500MB of free memory.
select version();:
PostgreSQL 9.1.15 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
uname -a:
Linux db 3.2.0-23-virtual #36-Ubuntu SMP Tue Apr 10 22:29:03 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
Postgresql.conf (everything else is commented out/default):
max_connections = 100
shared_buffers = 500MB
work_mem = 2000kB
maintenance_work_mem = 128MB
wal_buffers = 16MB
checkpoint_segments = 32
checkpoint_completion_target = 0.9
random_page_cost = 2.0
effective_cache_size = 1000MB
default_statistics_target = 100
log_temp_files = 0
I got these values from pgtune (I chose "mixed type of applications") and have been fiddling with them based on what I've read, without making much real progress. At the moment there are 68 connections, which is a typical number (I'm not using pgbouncer or any other connection pooler yet).
/etc/sysctl.conf:
kernel.shmmax=1050451968
kernel.shmall=256458
vm.overcommit_ratio=100
vm.overcommit_memory=2
I first changed overcommit_memory to 2 about a fortnight ago after the OOM killer killed the Postgres server. Prior to that the server had been running fine for a long time. The errors I get now are less catastrophic but much more annoying because they are much more frequent.
I haven't had much luck pinpointing the first event that causes postgres to run "out of memory" - it seems to be different each time. The most recent time it crashed, the first three lines logged were:
2015-04-07 05:32:39 UTC ERROR: out of memory
2015-04-07 05:32:39 UTC DETAIL: Failed on request of size 125.
2015-04-07 05:32:39 UTC CONTEXT: automatic analyze of table "xxx.public.delayed_jobs"
TopMemoryContext: 68688 total in 10 blocks; 4560 free (4 chunks); 64128 used
[... snipped heaps of lines which I can provide if they are useful ...]
---
2015-04-07 05:33:58 UTC ERROR: out of memory
2015-04-07 05:33:58 UTC DETAIL: Failed on request of size 16.
2015-04-07 05:33:58 UTC STATEMENT: SELECT oid, typname, typelem, typdelim, typinput FROM pg_type
2015-04-07 05:33:59 UTC LOG: could not fork new process for connection: Cannot allocate memory
2015-04-07 05:33:59 UTC LOG: could not fork new process for connection: Cannot allocate memory
2015-04-07 05:33:59 UTC LOG: could not fork new process for connection: Cannot allocate memory
TopMemoryContext: 396368 total in 50 blocks; 10160 free (28 chunks); 386208 used
[... snipped heaps of lines which I can provide if they are useful ...]
---
2015-04-07 05:33:59 UTC ERROR: out of memory
2015-04-07 05:33:59 UTC DETAIL: Failed on request of size 1840.
2015-04-07 05:33:59 UTC STATEMENT: SELECT... [nested select with 4 joins, 19 ands, and 2 order bys]
TopMemoryContext: 388176 total in 49 blocks; 17264 free (55 chunks); 370912 used
The crash before that, a few hours earlier, just had three instances of that last query as the first three lines of the crash. That query gets run very often, so I'm not sure if the issues are because of this query, or if it just comes up in the error log because it's a reasonably complex SELECT getting run all the time. That said, here's an EXPLAIN ANALYZE of it: http://explain.depesz.com/s/r00
This is what ulimit -a for the postgres user looks like:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15956
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 15956
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I'll try to get the exact numbers from free next time there's a crash; in the meantime, this is a brain dump of all the info I have.
Any ideas on where to go from here?
I just ran into this same issue with a ~2.5 GB plain-text SQL file I was trying to restore. I scaled my Digital Ocean server up to 64 GB RAM, created a 10 GB swap file, and tried again. I got an out-of-memory error with 50 GB free, and no swap in use.
I scaled back my server to the small 1 GB instance I was using (requiring a reboot) and figured I'd give it another shot for no other reason than I was frustrated. I started the import and realized I forgot to create my temporary swap file again.
I created it in the middle of the import. psql made it a lot further before crashing. It made it through 5 additional tables.
I think there must be a bug allocating memory in psql.
Can you check if there's any swap memory available when the error comes up?
I completely removed the swap on my Linux desktop (just for testing other things...) and I got exactly the same error! I'm pretty sure that this is what is going on with you too.
It is a bit suspicious that you report the same free memory size as your shared_buffers size. Are you sure you are looking at the right values?
The output of the free command at the time of the crash would be useful, as well as the contents of /proc/meminfo.
Beware that setting overcommit_memory to 2 is not very effective if you leave overcommit_ratio at 100. It basically limits memory allocation to the size of swap (0 in this case) plus 100% of physical RAM, which leaves no headroom for shared memory and disk caches.
You should probably set overcommit_ratio to 50.
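As a rough sketch of that arithmetic for this box (assuming 2 GB of RAM and no swap):
use strict;
use warnings;

# Under vm.overcommit_memory=2, the commit limit is roughly swap + RAM * overcommit_ratio / 100.
my $ram_mb  = 2048;
my $swap_mb = 0;
for my $ratio (100, 50) {
    my $limit_mb = $swap_mb + $ram_mb * $ratio / 100;
    printf "overcommit_ratio=%-3d => CommitLimit ~ %d MB\n", $ratio, $limit_mb;
}
# overcommit_ratio=100 => CommitLimit ~ 2048 MB (nothing left for shared memory and disk caches)
# overcommit_ratio=50  => CommitLimit ~ 1024 MB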
I started memcached as a daemon with 512MB of memory.
memcached -d -m 512
Then, I telnet'd to the box, and ran the stats command.
Why does the limit_maxbytes field equal 536870912? I would've expected 512,000,000.
STAT limit_maxbytes 536870912
This is because -m 512 means 512 MB in binary units, and 512 * 1024 * 1024 = 536870912 bytes.
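Quick sanity check in Perl:
print 512 * 1024 * 1024, "\n";   # 536870912, which is what limit_maxbytes reports
print 512 * 1000 * 1000, "\n";   # 512000000, which is what you expected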
I have two identical servers, both with PostgreSQL server version 9.0.4 installed with the same configuration. If I launch a .sql file that performs about 5k inserts, on the first one it takes a couple of seconds, while on the second one it takes 1 minute and 30 seconds.
If I turn synchronous_commit off, execution time drops dramatically (as expected) and the performance of the two servers is comparable. But if I set synchronous_commit to on, on one server the insert script's execution time increases by less than a second, while on the other it increases enormously, as described above.
Any idea about this difference in performance? Am I missing some configuration?
Update: tried a simple disk test: time sh -c "dd if=/dev/zero of=ddfile bs=8k count=200000 && sync"
fast server output:
1638400000 bytes (1.6 GB) copied, 1.73537 seconds, 944 MB/s
real 0m32.009s
user 0m0.018s
sys 0m2.298s
slow server output:
1638400000 bytes (1.6 GB) copied, 4.85727 s, 337 MB/s
real 0m35.045s
user 0m0.019s
sys 0m2.221s
Common features (both servers):
SATA, RAID1, controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI Controller, distribution: CentOS Linux. mount -v output:
/dev/md2 on / type ext3 (rw)
proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md1 on /boot type ext3 (rw)
fast server: kernel 2.6.18-238.9.1.el5 #1 SMP
Disk /dev/sda: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sda1 3906 4209029 2102562 fd Linux raid autodetect
/dev/sda2 4209030 4739174 265072+ fd Linux raid autodetect
/dev/sda3 4739175 1465144064 730202445 fd Linux raid autodetect
slow server: kernel 2.6.32-71.29.1.el6.x86_64 #1 SMP
Disk /dev/sda: 750.2 GB, 750156374016 bytes
64 heads, 32 sectors/track, 715404 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006ffc4
Device Boot Start End Blocks Id System
/dev/sda1 2048 4194303 2096128 fd Linux raid autodetect
/dev/sda2 4194304 5242879 524288 fd Linux raid autodetect
/dev/sda3 5242880 1465147391 729952256 fd Linux raid autodetect
Could this information be useful for addressing the performance issue?
I suppose your slow server with the newer kernel has working barriers. This is good, as otherwise you can lose data in case of a power failure. But it is of course slower than running with the write cache enabled and without barriers, aka running with scissors.
You can check if barriers are enabled using mount -v — search for barrier=1 in output. You can disable barriers for your filesystem (mount -o remount,barrier=0 /) to speed up, but then you risk data corruption.
Try to do your 5k inserts in one transaction; Postgres won't have to flush to disk on every row inserted. The theoretical limit for the number of transactions per second would be comparable to the disk's rotational speed (a 7200 rpm disk ≈ 7200/60 = 120 tps), as a disk can only write to a given sector once per rotation.
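For example, a sketch with Perl DBI and DBD::Pg (the DSN, credentials, table and column names here are made up):
use strict;
use warnings;
use DBI;

# AutoCommit => 0 keeps everything in one transaction; the WAL is only
# flushed to disk once, at the commit, instead of after every INSERT.
my $dbh = DBI->connect('dbi:Pg:dbname=testdb', 'user', 'password',
    { AutoCommit => 0, RaiseError => 1 });

my $sth = $dbh->prepare('INSERT INTO items (val) VALUES (?)');
$sth->execute($_) for 1 .. 5000;

$dbh->commit;       # one flush instead of ~5000
$dbh->disconnect;
Wrapping the inserts in a single BEGIN/COMMIT inside the .sql file itself has the same effect.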
To me this sounds like the "fast" server has a write cache enabled for the hard disk(s), whereas on the slow server the hard disk(s) are really writing the data when PG writes it (by calling fsync).
I am trying to configure Magento session management with a memcached server. I have installed memcached and its client, and configured my Magento local.xml file under the etc folder as below. My memcached server is listening on the default port, 11211.
The Magento storefront is working correctly with memcached.
I am curious about the statistics of the memcached server: how many cache session misses/hits happen on the server, and other statistics too. I used the following command to see them:
$ echo "stats settings" | nc localhost 11211
STAT maxbytes 67108864
STAT maxconns 1024
STAT tcpport 11211
STAT udpport 11211
STAT inter 127.0.0.1
STAT verbosity 0
STAT oldest 0
STAT evictions on
STAT domain_socket NULL
STAT umask 700
STAT growth_factor 1.25
STAT chunk_size 48
STAT num_threads 4
STAT stat_key_prefix :
STAT detail_enabled no
STAT reqs_per_event 20
STAT cas_enabled yes
STAT tcp_backlog 1024
STAT binding_protocol auto-negotiate
STAT auth_enabled_sasl no
STAT item_size_max 1048576
END
Can anyone help me find out what commands or procedure I should use to see my memcached daemon's cache misses/hits?
There's a really nice tool that will show you all the stats you need:
http://code.google.com/p/phpmemcacheadmin/
Really easy to install and configure.
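If you just want the raw counters without installing anything, note that plain "stats" (as opposed to "stats settings") already includes get_hits and get_misses. A quick Perl one-liner along those lines (assuming memcached on localhost:11211):
perl -MIO::Socket::INET -e '
    my $s = IO::Socket::INET->new("localhost:11211") or die $!;
    print $s "stats\r\n";
    while (<$s>) { last if /^END/; print if /^STAT get_(hits|misses)/ }'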