Memcached `stats` Command

I started memcached as a daemon with 512MB of memory.
memcached -d -m 512
Then I telnetted to the box and ran the stats command.
Why does the limit_maxbytes field equal 536870912? I would've expected 512,000,000.
STAT limit_maxbytes 536870912

This is because -m 512 means 512 MB in the binary sense (mebibytes): 512 * 1024 * 1024 = 536,870,912 bytes, not 512,000,000.
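In other words, -m is interpreted in mebibytes, so the reported limit is exactly:
echo $((512 * 1024 * 1024))    # prints 536870912, i.e. 512 MiB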

Related

Docker disk space issue while there is still space left on the host

I have PostgreSQL running in a Docker container (Docker 17.09.0-ce-mac35 on OS X 10.11.6) and I'm inserting data from a Python application on the host. After a while I consistently get the following error in Python while there is still plenty of disk space available on the host:
psycopg2.OperationalError: could not extend file "base/16385/24599.49": wrote only 4096 of 8192 bytes at block 6543502
HINT: Check free disk space.
This is my docker-compose.yml:
version: "2"
services:
  rabbitmq:
    container_name: rabbitmq
    build: ../messaging/
    ports:
      - "4369:4369"
      - "5672:5672"
      - "25672:25672"
      - "15672:15672"
      - "5671:5671"
  database:
    container_name: database
    build: ../database/
    ports:
      - "5432:5432"
The database Dockerfile looks like this:
FROM ubuntu:17.04
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ zesty-pgdg main" > /etc/apt/sources.list.d/pgdg.list
RUN apt-get update && apt-get install -y --allow-unauthenticated python-software-properties software-properties-common postgresql-10 postgresql-client-10 postgresql-contrib-10
USER postgres
RUN /etc/init.d/postgresql start &&\
    psql --command "CREATE USER ****** WITH SUPERUSER PASSWORD '******';" &&\
    createdb -O ****** ******
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/10/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/10/main/postgresql.conf
EXPOSE 5432
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
CMD ["/usr/lib/postgresql/10/bin/postgres", "-D", "/var/lib/postgresql/10/main", "-c", "config_file=/etc/postgresql/10/main/postgresql.conf"]
df -k output:
Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on
/dev/disk2 1088358016 414085004 674017012 39% 103585249 168504253 38% /
devfs 190 190 0 100% 658 0 100% /dev
map -hosts 0 0 0 100% 0 0 100% /net
map auto_home 0 0 0 100% 0 0 100% /home
Update 1:
It seems like the container has now shut down. I'll start over and try to run df -k inside the container before it shuts down.
2017-11-14 14:48:25.117 UTC [18] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2017-11-14 14:48:25.120 UTC [17] WARNING: terminating connection because of crash of another server process
2017-11-14 14:48:25.120 UTC [17] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2017-11-14 14:48:25.120 UTC [17] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2017-11-14 14:48:25.132 UTC [1] LOG: all server processes terminated; reinitializing
2017-11-14 14:48:25.175 UTC [1] FATAL: could not access status of transaction 0
2017-11-14 14:48:25.175 UTC [1] DETAIL: Could not write to file "pg_notify/0000" at offset 0: No space left on device.
2017-11-14 14:48:25.181 UTC [1] LOG: database system is shut down
Update 2:
This is df -k on the container, /dev/vda2 seems to be filling up quickly:
$ docker exec -it database df -k
Filesystem 1K-blocks Used Available Use% Mounted on
none 61890340 15022448 43700968 26% /
tmpfs 65536 0 65536 0% /dev
tmpfs 1023516 0 1023516 0% /sys/fs/cgroup
/dev/vda2 61890340 15022448 43700968 26% /etc/postgresql
shm 65536 8 65528 1% /dev/shm
tmpfs 1023516 0 1023516 0% /sys/firmware
Update 3:
This seems to be related to the ~64 GB file size limit on Docker.qcow2. Solved using qemu and gparted as follows:
cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/
qemu-img info Docker.qcow2
qemu-img resize Docker.qcow2 +200G
qemu-img info Docker.qcow2
qemu-system-x86_64 -drive file=Docker.qcow2 -m 512 -cdrom ~/Downloads/gparted-live-0.30.0-1-i686.iso -boot d -device usb-mouse -usb
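Before resizing, it can also be worth checking whether unused images, containers or volumes are what is filling the image; the commands below assume Docker 17.06 or newer:
docker system df                  # show space used by images, containers and local volumes
docker system prune --volumes     # remove stopped containers, unused networks, dangling images and unused volumes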

MongoDB Install Simple Test Ops Manager java.lang.OutOfMemoryError on startup

I just installed a test evaluation of MongoDB Ops Manager and get an error on startup of the Backup HTTP server:
Migrate MMS data
Running migrations...[ OK ]
Start MMS server
Instance 0 starting..........[ OK ]
Start Backup HTTP Server
Instance 0 starting.......[FAILED]
2015-05-07T14:00:32.107+0000 [main] gid ERROR ServerMain:199 - Cannot start bslurp server [FATAL-EXITING] - instance: 0 - msg: unable to create new native thread
java.lang.OutOfMemoryError: unable to create new native thread
I appear to have plenty of memory:
[root@krh60621 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         15951       4588      11362          0        364       2021
and I upped the max user processes to unlimited to see if that would help:
[root@krh60621 ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 127421
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 94000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
[root@krh60621 ~]# ps -eLF | grep -c java
593
[root@krh60621 ~]# ps -eLF | wc -l
1031
Any thoughts???
I encountered a similar issue in our Test Ops Manager deployment when we upgraded to Ops Manager 1.8.0. I ultimately opened up a ticket with MongoDB Support and this was the resolution for our issue:
The Ops Manager components are launched using the default username "mongodb-mms". Please adjust the ulimit settings for this user to match those of the "mongodb" user, currently defined in /etc/security/limits.d/99-mongodb-mms-automation-agent.conf.
You may wish to add a separate file under /etc/security/limits.d/ for the mongodb-mms user.
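For example, a minimal sketch of such a file (the file name and the limit values are illustrative; copy the real values from 99-mongodb-mms-automation-agent.conf on your system):
# /etc/security/limits.d/99-mongodb-mms.conf  (hypothetical file name)
mongodb-mms soft nofile 64000
mongodb-mms hard nofile 64000
mongodb-mms soft nproc 64000
mongodb-mms hard nproc 64000
# verify from a fresh login shell: su - mongodb-mms -s /bin/bash -c 'ulimit -a'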

Two PostgreSQL servers with the same configuration, different performance

I have two identical servers, both running PostgreSQL 9.0.4 with the same configuration. If I run a .sql file that performs about 5k inserts, it takes a couple of seconds on the first one and 1 minute 30 seconds on the second.
If I turn synchronous_commit off, execution time drops dramatically (as expected) and the two servers perform comparably. But with synchronous_commit on, the insert script's execution time increases by less than one second on one server, while on the other it increases to the 1 minute 30 seconds mentioned above.
Any idea about this difference in performance? Am I missing some configuration?
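For reference, synchronous_commit can be checked and overridden for a single run roughly like this (a sketch; database and file names are placeholders):
psql -d mydb -c "SHOW synchronous_commit;"
PGOPTIONS="-c synchronous_commit=off" psql -d mydb -f inserts.sql    # trades durability of the latest commits for speed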
Update: tried a simple disk test: time sh -c "dd if=/dev/zero of=ddfile bs=8k count=200000 && sync"
fast server output:
1638400000 bytes (1.6 GB) copied, 1.73537 seconds, 944 MB/s
real 0m32.009s
user 0m0.018s
sys 0m2.298s
slow server output:
1638400000 bytes (1.6 GB) copied, 4.85727 s, 337 MB/s
real 0m35.045s
user 0m0.019s
sys 0m2.221s
Common features (both servers):
SATA, RAID1, controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI Controller, distribution: Linux CentOS. mount -v output:
/dev/md2 on / type ext3 (rw)
proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md1 on /boot type ext3 (rw)
fast server: kernel 2.6.18-238.9.1.el5 #1 SMP
Disk /dev/sda: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sda1 3906 4209029 2102562 fd Linux raid autodetect
/dev/sda2 4209030 4739174 265072+ fd Linux raid autodetect
/dev/sda3 4739175 1465144064 730202445 fd Linux raid autodetect
slow server: kernel 2.6.32-71.29.1.el6.x86_64 #1 SMP
Disk /dev/sda: 750.2 GB, 750156374016 bytes
64 heads, 32 sectors/track, 715404 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006ffc4
Device Boot Start End Blocks Id System
/dev/sda1 2048 4194303 2096128 fd Linux raid autodetect
/dev/sda2 4194304 5242879 524288 fd Linux raid autodetect
/dev/sda3 5242880 1465147391 729952256 fd Linux raid autodetect
Could this information be useful for addressing the performance issue?
I suppose your slow server with the newer kernel has working write barriers. This is good, as otherwise you could lose data in case of a power failure. But it is of course slower than running with the write cache enabled and without barriers, aka running with scissors.
You can check if barriers are enabled using mount -v — search for barrier=1 in output. You can disable barriers for your filesystem (mount -o remount,barrier=0 /) to speed up, but then you risk data corruption.
Try to do your 5k inserts in one transaction, so Postgres won't have to write to disk on every row inserted. The theoretical limit for the number of transactions per second would then be comparable to the disk's rotational speed (a 7200 rpm disk ≈ 7200/60 = 120 tps), as a disk can only write to a given sector once per rotation.
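For example, a sketch assuming the inserts live in a file called inserts.sql:
psql -d mydb --single-transaction -f inserts.sql    # one BEGIN/COMMIT around the whole script
Alternatively, wrap the statements in the file itself between BEGIN; and COMMIT;.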
To me this sounds like the "fast" server has a write cache enabled for the hard disk(s), whereas on the slow server the hard disk(s) are really writing the data when PG writes it (by calling fsync).
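One way to check this is the drive's write-cache flag via hdparm (assuming direct SATA access; a hardware RAID controller may hide or override it):
hdparm -W /dev/sda     # report whether the drive's write cache is enabled
hdparm -W0 /dev/sda    # disable it (safer for data, slower)
hdparm -W1 /dev/sda    # enable it (faster, riskier on power loss)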

memcached session server cache miss/hit count command

I am trying to configure Magento session management with a memcached server. I have installed memcached and its client, and configured my Magento local.xml file under the etc folder accordingly. My memcached server is listening on the default port 11211.
The Magento storefront is working correctly with memcached.
I am curious about the memcached server's statistics: how many cache hits/misses happen on the server, and other statistics too. I have used the following command to see them:
$ echo "stats settings" | nc localhost 11211
STAT maxbytes 67108864
STAT maxconns 1024
STAT tcpport 11211
STAT udpport 11211
STAT inter 127.0.0.1
STAT verbosity 0
STAT oldest 0
STAT evictions on
STAT domain_socket NULL
STAT umask 700
STAT growth_factor 1.25
STAT chunk_size 48
STAT num_threads 4
STAT stat_key_prefix :
STAT detail_enabled no
STAT reqs_per_event 20
STAT cas_enabled yes
STAT tcp_backlog 1024
STAT binding_protocol auto-negotiate
STAT auth_enabled_sasl no
STAT item_size_max 1048576
END
Can anyone help me find out what command or procedure I should use to see my memcached daemon's cache hits and misses?
There's a really nice tool that will show you all the stats you need:
http://code.google.com/p/phpmemcacheadmin/
Really easy to install and configure.
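If you just want the raw counters without a UI, the plain stats command (as opposed to stats settings) already includes the hit/miss counters, for example:
echo "stats" | nc localhost 11211 | grep -E "get_hits|get_misses|evictions"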

Visualising Memcached RAM consumption over time

I'd like to visualise the RAM usage of a Memcached daemon - what is the best utility to use?
Ideally I'd like to use Perl.
Memcached reports a number of statistics, such as memory used, objects stored, and hits and misses. Connect to the server (probably localhost:11211) with a standard TCP socket and write "stats\n" to get back a list of statistics. See below for an example.
Look at Cacti for actually graphing the data. I've had great success with it.
$ telnet localhost 11211
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
stats
STAT pid 75723
STAT uptime 4166691
STAT time 1236609062
STAT version 1.2.4
STAT pointer_size 32
STAT rusage_user 115.028511
STAT rusage_system 326.163351
STAT curr_items 83335
STAT total_items 1822140
STAT bytes 239997834
STAT curr_connections 48
STAT total_connections 7840
STAT connection_structures 83
STAT cmd_get 4273541
STAT cmd_set 1822140
STAT get_hits 2442609
STAT get_misses 1830932
STAT evictions 1696494
STAT bytes_read 5162992092
STAT bytes_written 7000049654
STAT limit_maxbytes 268435456
STAT threads 1
END
Take a look at Amnesia (http://github.com/benschwarz/amnesia/tree/master); it probably does something close to what you want.
If you want to build your own little tool, you should check out RRDtool.
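For example, a minimal shell sketch that samples the bytes statistic every minute and stores it in an RRD (file names are arbitrary; rrdtool and netcat must be installed):
rrdtool create memcached.rrd --step 60 DS:bytes:GAUGE:120:0:U RRA:AVERAGE:0.5:1:1440
while true; do
  bytes=$(echo "stats" | nc localhost 11211 | tr -d '\r' | awk '$2 == "bytes" {print $3}')
  rrdtool update memcached.rrd "N:${bytes}"
  sleep 60
done
# then graph the last 24 hours, e.g.:
rrdtool graph memcached-bytes.png --start -86400 DEF:b=memcached.rrd:bytes:AVERAGE LINE1:b#0000ff:"bytes used"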