Memcache `delete_misses` are very high compared to `delete_hits` while `evictions` are 0 - memcached

I am using Memcached to cache HTML for my project, but most of the cached pages are not retrieved and the requests end in a server timeout. Also, delete_misses is much higher than delete_hits, while evictions is 0.
Here are the memcached stats:
STAT pid 18323
STAT uptime 384753
STAT time 1468390067
STAT version 1.4.27
STAT libevent 1.4.13-stable
STAT pointer_size 64
STAT rusage_user 75.178571
STAT rusage_system 31.052279
STAT curr_connections 10
STAT total_connections 9517
STAT connection_structures 25
STAT reserved_fds 20
STAT cmd_get 9410
STAT cmd_set 991
STAT cmd_flush 0
STAT cmd_touch 0
STAT get_hits 7788
STAT get_misses 1622
STAT get_expired 265
STAT delete_misses 18439
STAT delete_hits 117
STAT incr_misses 0
STAT incr_hits 0
STAT decr_misses 0
STAT decr_hits 0
STAT cas_misses 0
STAT cas_hits 0
STAT cas_badval 0
STAT touch_hits 0
STAT touch_misses 0
STAT auth_cmds 0
STAT auth_errors 0
STAT bytes_read 45007488
STAT bytes_written 321441436
STAT limit_maxbytes 1073741824
STAT accepting_conns 1
STAT listen_disabled_num 0
STAT time_in_listen_disabled_us 0
STAT threads 4
STAT conn_yields 0
STAT hash_power_level 16
STAT hash_bytes 524288
STAT hash_is_expanding 0
STAT malloc_fails 0
STAT log_worker_dropped 0
STAT log_worker_written 0
STAT log_watcher_skipped 0
STAT log_watcher_sent 0
STAT bytes 12134672
STAT curr_items 266
STAT total_items 991
STAT expired_unfetched 188
STAT evicted_unfetched 0
STAT evictions 0
STAT reclaimed 340
STAT crawler_reclaimed 0
STAT crawler_items_checked 0
STAT lrutail_reflocked 0
END

Assuming the keys are correct, this might have happened because the items had already expired by the time the deletes arrived (note the non-zero get_expired and expired_unfetched counters above).
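For illustration, here is a minimal Python sketch using only the standard socket module and the raw memcached text protocol against a local server on port 11211 (the key name is made up): deleting a key that has already expired is answered with NOT_FOUND and counted as a delete_miss.

import socket
import time

def command(sock, payload):
    # Send one text-protocol command and return the raw reply.
    sock.sendall(payload + b"\r\n")
    return sock.recv(65536)

sock = socket.create_connection(("localhost", 11211))

# Store a 5-byte item that expires after 1 second:
# set <key> <flags> <exptime> <bytes>
print(command(sock, b"set page:/home 0 1 5\r\nhello"))  # b'STORED\r\n'
time.sleep(2)

# The item has expired, so the delete reports NOT_FOUND
# and the server increments delete_misses.
print(command(sock, b"delete page:/home"))              # b'NOT_FOUND\r\n'

# Read back the delete counters.
for line in command(sock, b"stats").splitlines():
    if b"delete_" in line:
        print(line.decode())

sock.close()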

Related

mount - can't find /dev/sdc in fstab error

I'm trying to mount an XFS disk.
I prepared the directories:
sudo mkdir /grid/;
sudo mkdir /grid/0;
Then I tried:
sudo mount -t xfs /dev/sdc /grid/0;
but I always receive:
can't find /dev/sdc /grid/0 in /etc/fstab
My fstab looks like this
#
# /etc/fstab
# Created by anaconda on Tue Dec 18 11:05:19 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=xxx /boot xfs defaults 0 0
UUID=zzz /boot/efi vfat umask=0077,shortname=winnt 0 0
/dev/mapper/rhel-home /home xfs defaults 0 0
/dev/mapper/rhel-var_log /var/log xfs defaults 0 0
/dev/mapper/rhel-swap swap swap defaults 0 0
/dev/sdc /grid/0 xfs defaults,noatime 0 0
/dev/sdd /grid/1 xfs defaults,noatime 0 0
/dev/sde /grid/2 xfs defaults,noatime 0 0
/dev/sdf /grid/3 xfs defaults,noatime 0 0
/dev/sdg /grid/4 xfs defaults,noatime 0 0
/dev/sdh /grid/5 xfs defaults,noatime 0 0
/dev/sdi /grid/6 xfs defaults,noatime 0 0
/dev/sdj /grid/7 xfs defaults,noatime 0 0
Is something wrong with my fstab? I can't find what I did wrong.
Edit: lsblk output:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 447.1G 0 disk
├─sda1 8:1 0 200M 0 part /boot/efi
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 445.9G 0 part
  ├─rhel-root 253:0 0 361.9G 0 lvm /
  ├─rhel-swap 253:1 0 4G 0 lvm [SWAP]
  ├─rhel-home 253:2 0 50G 0 lvm /home
  └─rhel-var_log 253:3 0 30G 0 lvm /var/log
sdc 8:32 0 3.7T 0 disk
sdd 8:48 0 3.7T 0 disk
sde 8:64 0 3.7T 0 disk
sdf 8:80 0 3.7T 0 disk
sdg 8:96 0 3.7T 0 disk
sdh 8:112 0 3.7T 0 disk
sdi 8:128 0 3.7T 0 disk
sdj 8:144 0 3.7T 0 disk
The issue was caused by non-breaking spaces mixed into the fstab records and into the mount command.
After converting the text to ASCII and enabling "Show All Characters" in Notepad++, I replaced every non-breaking space with an ordinary space and the command finished successfully.
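If you want to check for the same problem from a script, a small Python sketch like the one below (it only reads /etc/fstab; nothing else is assumed) prints every line containing non-ASCII bytes such as the UTF-8 non-breaking space (0xC2 0xA0):

# Flag any fstab line that contains non-ASCII bytes,
# e.g. a stray non-breaking space.
with open("/etc/fstab", "rb") as f:
    for lineno, raw in enumerate(f, start=1):
        suspicious = [hex(b) for b in raw if b > 0x7F]
        if suspicious:
            print(f"line {lineno}: non-ASCII bytes {suspicious}")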

MongoDB: Increasing read speed on a single machine

I use MongoDB to store price events for stocks. Depending on what you want to screen, the volume of events can rapidly grow to 1-2 GB.
I run MongoDB on a single machine and it is taking longer and longer to load the data. I cannot find a clear answer on the web as to whether "sharding on a single server" benefits read speed.
Is it the right path to increasing the read speed?
insert query update delete getmore command flushes mapped vsize res faults locked db
0 1 395 0 1 395 0 63.9g 128g 2.53g 7484 prices:70.5%
0 0 14726 0 5 14728 0 63.9g 128g 2.48g 31555 prices:7.4%
0 0 0 0 0 1 0 63.9g 128g 2.48g 436 prices:0.0%
0 0 0 0 1 1 0 63.9g 128g 2.48g 0 prices:0.0%
0 0 0 0 0 1 1 63.9g 128g 2.49g 3877 .:83.9%
0 0 0 0 0 1 0 63.9g 128g 2.49g 0 prices:0.0%
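Whatever approach you try, it helps to time the read you actually care about before and after the change. A minimal pymongo sketch (the database, collection and field names here are hypothetical) could look like this:

import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["prices"]["events"]           # hypothetical names

start = time.monotonic()
docs = list(events.find({"symbol": "AAPL"}))  # a typical screening read
elapsed = time.monotonic() - start
print(f"read {len(docs)} events in {elapsed:.2f}s")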

Avoid read-only forked() RAM allocation on exit in Perl

In Perl, I generate a huge read-only data-structure once, then fork().
This is to take advantage of COW on RSS pages when forking. It works really well, but when a child process exits, it allocates all the RAM for itself just prior to dying.
Is there a way to avoid this useless allocation?
Here is a sample Perl script that shows the issue.
#! /usr/bin/perl
my $a = [];
# Allocate 100 MiB
for my $i (1 .. 100000) {
    push @$a, "x" x 1024;
}
# Fork 10 other processes
for my $j (1 .. 10) {
    last unless fork();
}
# Sleep for a while to be able to see the RSS
sleep(5);
In the vmstat output below, we can see that it first allocates only 100 MiB, then after the first sleep it briefly allocates the whole amount, and then releases all of it.
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 1329660 80596 86936 0 0 21 18 160 25 0 0 100 0 0
1 0 0 1328048 80596 86936 0 0 0 0 1013 44 0 0 100 0 0
0 0 0 1223888 80596 86936 0 0 0 0 1028 76 11 5 84 0 0
0 0 0 1223888 80596 86936 0 0 0 0 1010 40 0 0 100 0 0
0 0 0 1223888 80596 86936 0 0 0 0 1026 54 0 0 100 0 0
0 0 0 1223888 80596 86936 0 0 0 0 1006 39 0 0 100 0 0
13 0 0 741156 80596 86936 0 0 0 0 1012 66 13 58 28 0 0
0 0 0 1329288 80596 86936 0 0 0 0 1032 60 0 0 100 0 0
Note: it doesn't seem to be a Perl-version-specific issue; I tested 5.8.8, 5.10.1 and 5.14.2, and they all exhibit this behavior.
Update:
As @choroba asked in the comments, I also tried to undef the data structure, but it seems that this triggers the same memory touching, as the RAM is then allocated.
You can add the following snippet at the end of the first script.
# Unallocate $a
undef $a;
# Sleep for a while to be able to see the RSS
sleep(5);
Actually, as I found out myself, this behavior is a feature, and the answer lies in the Perl doc:
The exit() function does not always exit immediately.
Likewise any object destructors that need to be called
are called before the real exit.
If this is a problem, you can
call POSIX::_exit($status) to avoid END and destructor processing.
And indeed, adding it at the end of the original code sample does avoid the behavior.
# XXX - To be added just before ending the process
# Use POSIX::_exit($status) to end without allocating copy-on-write RAM
use POSIX;
POSIX::_exit(0);
Note: for this to work, the child also has to exit before the data structure goes out of scope.

"netIn" in mongostat output

I have written a test script which does millions of updates (using an update query) in a collection. Following is the mongostat output:
insert query update delete getmore command flushes mapped vsize res faults locked % idx miss % qr|qw ar|aw netIn netOut conn time
0 0 21156 0 0 1 0 208m 2.45g 119m 0 81.7 0 0|8 0|9 2m 1k 10 12:52:11
0 0 20620 0 0 1 0 208m 2.45g 119m 0 82.5 0 0|8 0|9 1m 1k 10 12:52:12
0 0 21915 0 0 1 0 208m 2.45g 119m 0 81.9 0 0|8 0|9 2m 1k 10 12:52:13
0 0 21634 0 0 1 0 208m 2.45g 119m 0 82.1 0 0|8 0|9 2m 1k 10 12:52:15
0 0 19793 0 0 1 0 208m 2.45g 119m 0 81.8 0 0|8 0|9 1m 1k 10 12:52:16
0 0 22062 0 0 1 0 208m 2.45g 119m 0 81.9 0 0|8 0|8 2m 1k 10 12:52:17
0 0 23395 0 0 1 0 208m 2.45g 119m 0 81.9 0 0|8 0|8 2m 1k 10 12:52:19
The netIn column shows the total inbound network traffic in bytes per second, I assume. Is there any way to increase netIn to some MB, so that I can do more update statements per second?
I don't think you understand the netIn statistic. It isn't some limit but the actual amount of data received by MongoDB per interval sample. In other words, the netIn value will increase if you (can) do more updates.
Increasing update throughput itself may be possible but is very application specific.
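As a rough sanity check (the ~100 bytes per update message is an assumption, not a measured value), the netIn column above is roughly the update rate times the size of each update message:

updates_per_sec = 21156            # "update" column in the first mongostat line
approx_bytes_per_update = 100      # assumed wire size of one update message
netin = updates_per_sec * approx_bytes_per_update
print(f"~{netin / 1e6:.1f} MB/s")  # ~2.1 MB/s, consistent with the "2m" netIn column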

Memcached using more than max memory

I have an installation of memcached which I want to use in my production environment, but when I ran a couple of tests it seemed that memcached doesn't free up memory even after it has used up all of its allocated memory. Also, I logged in and ran a flush_all command, but the objects are still in the cache.
Here is the output from some tests.
memcache-top
memcache-top v0.6 (default port: 11211, color: on, refresh: 3 seconds)
INSTANCE USAGE HIT % CONN TIME EVICT/s READ/s WRITE/s
127.0.0.1:11211 427.1% 0.0% 18 1.4ms 0.0 244 261.0K
AVERAGE: 427.1% 0.0% 18 1.4ms 0.0 244 261.0K
TOTAL: 4.3MB/ 1.0MB 18 1.4ms 0.0 244 261.0K
memcached-tool 127.0.0.1:11211 display
No Item_Size Max_age Pages Count Full? Evicted Evict_Time OOM
1 560B 4s 1 1872 yes 0 0 15488
2 704B 32s 1 559 no 0 0 0
3 880B 4s 1 1191 yes 0 0 1335
4 1.1K 9s 1 116 no 0 0 0
5 1.4K 21s 1 14 no 0 0 0
6 1.7K 4s 1 17 no 0 0 0
7 2.1K 84s 1 24 no 0 0 0
8 2.7K 130s 1 60 no 0 0 0
9 3.3K 25s 1 290 no 0 0 0
10 4.2K 9s 1 194 no 0 0 0
11 5.2K 9s 1 116 no 0 0 0
15 12.7K 816s 1 1 no 0 0 0
16 15.9K 769s 1 5 no 0 0 0
18 24.8K 786s 1 1 no 0 0 0
21 48.5K 816s 1 1 no 0 0 0
memcached-tool 127.0.0.1:11211 stats
127.0.0.1:11211 Field Value
accepting_conns 1
auth_cmds 0
auth_errors 0
bytes 4478060
bytes_read 23964596
bytes_written 546642860
cas_badval 0
cas_hits 0
cas_misses 0
cmd_flush 0
cmd_get 240894
cmd_set 4504
conn_yields 0
connection_structures 21
curr_connections 18
curr_items 4461
decr_hits 0
decr_misses 0
delete_hits 0
delete_misses 0
evictions 0
get_hits 43756
get_misses 197138
incr_hits 0
incr_misses 0
limit_maxbytes 1048576
listen_disabled_num 0
pid 8731
pointer_size 64
reclaimed 0
rusage_system 5.047232
rusage_user 4.311344
threads 4
time 1306247929
total_connections 3092
total_items 4504
uptime 1240
version 1.4.5
-m tells memcached how much RAM to use for item storage (in megabytes). Note carefully that this isn't a global memory limit, so memcached will use a few % more memory than you tell it to. Set this to safe values. Setting it to less than 48 megabytes does not work properly in 1.4.x and earlier. It will still use the memory.
Source: https://github.com/memcached/memcached/wiki/ConfiguringServer#commandline-arguments
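To confirm what -m actually resolved to on a running server, you can read limit_maxbytes back from the stats; a minimal Python sketch (standard socket module, local server on port 11211) is below. In the stats above, limit_maxbytes is 1048576 (1 MB), far under the ~48 MB minimum the wiki warns about, which fits the over-limit usage reported by memcache-top.

import socket

sock = socket.create_connection(("localhost", 11211))
sock.sendall(b"stats\r\n")

# Read until the terminating END line.
reply = b""
while not reply.endswith(b"END\r\n"):
    reply += sock.recv(4096)
sock.close()

stats = dict(line.split()[1:3] for line in reply.decode().splitlines()
             if line.startswith("STAT"))
print("limit_maxbytes:", int(stats["limit_maxbytes"]))  # the value set via -m
print("bytes (in use):", int(stats["bytes"]))           # actual item storage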