Log Memcached Activity

I'm looking to log all activity going on in my memcached server: all reads and writes.
It will be used as a shared daemon by lots of remote PHP apps in the cloud, and I need a way to SSH in and check out the activity going on on the daemon.
I've googled extensively and can't find a single way to do this.
The Redis equivalent would be logging into the interactive console and typing MONITOR.
Thanks in advance.

You can do this using memcached's stats command over telnet, like so:
Enable capturing: stats detail on
Wait a while.
Disable capturing: stats detail off
Dump the information: stats detail dump
Run telnet yourhost 11211 and issue the sequence above. Note that this will greatly impact performance.
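For example, a session might look something like this (the dump output is illustrative; the actual counters depend on your traffic):

$ telnet yourhost 11211
Connected to yourhost.
stats detail on
OK
(let application traffic run for a while)
stats detail off
OK
stats detail dump
PREFIX your.key get 12 hit 10 set 2 del 0
END
quit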
Also you could check out phpmemcacheadmin - it's a really nice tool for monitoring memcached pools.

You can start your memcached using the -vv parameter:
-vv very verbose (also print client commands/responses)
The output is similar to this:
Dec 12 10:33:12 hostname memcached[18350]: <33 new auto-negotiating client connection
Dec 12 10:33:12 hostname memcached[18350]: 33: Client using the ascii protocol
Dec 12 10:33:12 hostname memcached[18350]: <33 set your.key.1 3 300 525
Dec 12 10:33:12 hostname memcached[18350]: >33 STORED
Dec 12 10:33:12 hostname memcached[18350]: <33 get your.key.2
Dec 12 10:33:12 hostname memcached[18350]: >33 sending key your.key.2
Dec 12 10:33:12 hostname memcached[18350]: >33 END
The output is sent to stdout, so you might want to redirect it to some file.
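For example (a sketch; the user, port, and log path are placeholders), start it with both streams captured, since the verbose output may be written to stderr rather than stdout:

memcached -u memcache -p 11211 -vv > /var/log/memcached.log 2>&1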

Use supervisord to run memcached. It will log memcached's output (based on the one, two, or three -v's) to syslog, so log rotation will work (it won't if you just pipe the output to a file), and you can use syslog to send all the logs to a central logging machine (using something like Graylog2 to collect the log info).
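A minimal supervisord program entry might look like this (a sketch: the binary path and memcached options are assumptions; supervisord's special "syslog" log target does the routing):

[program:memcached]
command=/usr/bin/memcached -u memcache -p 11211 -vv
stdout_logfile=syslog
stderr_logfile=syslog
autorestart=true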

Can a syslog PRI value be negative?

First, I will describe my architecture:
client ---> haproxy ---> syslog-ng ---> kafka
The client is a Cisco ASA, haproxy is the load-balancing server, and syslog-ng receives, filters, and sends the logs to Kafka (the destination).
The client sends logs to haproxy, and haproxy sends them to syslog-ng over TCP.
With TCP, the client-server connection breaks on timeout, and whenever the client restores the connection, the message's PRI value is negative, which we can see in Wireshark. Because of this, the messages get mixed up.
The "Connection restored" message itself is normal, but a negative PRI value is incorrect.
Here are the logs:
<-1>May 24 2021 17:40:28: %ASA--1-6414004: TCP Syslog Server private:xx.xx.xx.xx/1470 -
Connection restored\\nCAL\\\\John Mike/xxxxxxxxxxxxxxxxxx) to private:xx.xx.xx.xx/xx duration 0:00:00 bytes 142
(John Mike/xxxxxxxxxxxxxxxxxx)\\nxxxxxxx)\\n4 2021 17:40:28: %ASA-6-302016: Teardown UDP connection 1733810491
We've increased the client connection timeout from 1 min to 12 hr, but the problem is not resolved.
Some versions of the Cisco ASA TCP syslog code are affected by bug CSCvz85683:
Symptom:
Wrong syslog message format, ex for 414004:
-1>Sep 08 2021 10:46:25: %ASA--1-6414004: TCP Syslog Server private:xx.xx.xx.xx/1470 - Connection restored\n (xx.xx.xx.xx/64437)
Conditions:
External logging to TCP server is enabled
Workaround:
NA
Further Problem Description:
ASA syslog messages have a 6-digit ID.
The valid range for message IDs is between 100000 and 999999.
Source: Cisco ASA Series Syslog Messages. About ASA Syslog Messages.
When logging via TCP, versions with the defective code shift the severity digit (6 in this case) into the message ID (414004 in this case) and emit an illegal priority of -1.
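For context, the PRI value at the start of a syslog message is facility × 8 + severity, so its legal range is 0 through 191; it can never be negative. As a worked example (assuming the common local4 facility, code 20), a correct message would begin <166> and carry the ID %ASA-6-414004, since 20 × 8 + 6 = 166, whereas the buggy output begins <-1> and fuses the severity into the ID as %ASA--1-6414004.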
According to the bug, this has been fixed in version 9.14.4.

Zookeeper status - telnet connections: 4

Could someone help me understand whether it is required to have 4 connections for ZooKeeper?
My requirement is simple: I want to run Apache Kafka with Spark on my local machine. As per the Kafka documentation, I started the ZooKeeper bundled under the Kafka bin directory and wanted to confirm that it is up.
So I tried "telnet localhost 2181" from the command prompt,
and got the output below:
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
stats
Zookeeper version: 3.4.5--1, built on 06/10/2013 17:26 GMT
Clients:
/127.0.0.1:34231[1](queued=0,recved=436,sent=436)
/127.0.0.1:34230[1](queued=0,recved=436,sent=436)
/127.0.0.1:37719[0](queued=0,recved=1,sent=0)
/127.0.0.1:34232[1](queued=0,recved=436,sent=436)
Latency min/avg/max: 0/0/42
Received: 2127
Sent: 2136
Connections: 4
Outstanding: 0
Zxid: 0x143
Mode: standalone
Node count: 51
Connection closed by foreign host.
I would like to know why it reports 4 connections with 4 clients. What does that actually mean?
Thank you in advance for helping me understand whether 4 clients are required.
I would like to know why it reports 4 connections with 4 clients. What does that actually mean?
It means there are currently four connections open to zookeeper. This connection:
/127.0.0.1:37719[0](queued=0,recved=1,sent=0)
is your telnet localhost 2181 connection.
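Incidentally, you don't need an interactive telnet session just to check whether ZooKeeper is up; the four-letter commands can be sent non-interactively, for example:

echo ruok | nc localhost 2181   (replies "imok" if the server is running)
echo stat | nc localhost 2181   (prints the same summary and client list as above)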

HAProxy not running stats socket

I installed haproxy from the AUR on Arch Linux and modified the config file a bit:
global
    maxconn 20000
    log 127.0.0.1 local0
    user haproxy
    stats socket /run/haproxy/haproxy.sock mode 660 level admin
    stats timeout 30s
    chroot /usr/share/haproxy
    pidfile /run/haproxy.pid
    daemon

defaults
    mode http
    stats enable
    stats uri /stats
    stats realm Haproxy\ Statistics

frontend www-http
    bind 127.0.0.1:80
    default_backend www-backend

backend www-backend
    mode http
    balance roundrobin
    timeout connect 5s
    timeout server 30s
    timeout queue 30s
    server app1 127.0.0.1:5001 check
    server app2 127.0.0.1:5002 check
I have made sure that the directory /run/haproxy exists and has permissions for the user haproxy to write to it:
ツ ls -al /run/haproxy
total 0
drwxr-xr-x 2 haproxy root 40 May 13 21:37 .
drwxr-xr-x 27 root root 720 May 13 22:00 ..
When I launch haproxy using systemctl start haproxy.service, it loads fine. I can even go to the /stats page and view stats. However, socat reports the following error:
ツ sudo socat unix-connect:/run/haproxy/haproxy.sock stdio
2016/05/13 22:04:11 socat[24202] E connect(5, AF=1 "/run/haproxy/haproxy.sock", 27): No such file or directory
I am at wits end and not able to understand what is happening. This is what I get from journalctl -xe:
May 13 21:56:31 rohanarch.local systemd[1]: Starting HAProxy Load Balancer...
May 13 21:56:31 rohanarch.local systemd[1]: Started HAProxy Load Balancer.
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: haproxy-systemd-wrapper: executing /usr/bin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: [WARNING] 133/215631 (20456) : config : missing timeouts for frontend 'www-http'.
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: | While not properly invalid, you will certainly encounter various problems
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: | with such a configuration. To fix this, please ensure that all following
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
Basically, there are no errors or warnings, not even so much as a mention of the stats socket. Others who have faced problems with the stats socket fail to get HAProxy started at all. In my case, it starts up fine, but the socket just isn't being created.
You need to create the directory yourself. Please ensure
/run/haproxy exists. If it doesn't, first create it with:
sudo mkdir /run/haproxy
This should resolve your issue.
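Note that /run is normally a tmpfs, so a directory created there by hand disappears on the next reboot. One way to make it persistent (a sketch, assuming systemd's tmpfiles.d mechanism, which Arch uses) is:

# /etc/tmpfiles.d/haproxy.conf
d /run/haproxy 0755 haproxy haproxy -

Then run systemd-tmpfiles --create to create the directory immediately; it will also be recreated on every boot.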
Try making SELinux permissive with the command below and restart the HAProxy service:
sudo setenforce 0

nmap seems to miss ports: doing something wrong?

If I specify a port range and scan for open ports such as the below
range, I get no result, even though ports (per netstat) are clearly
open and listening for web activity in this range:
[me#box ~]$ ./nmap --open -A --script ssl-enum-ciphers.nse,ssl-cert.nse -p [10050-65535] w.x.y.z
Starting Nmap 7.01 at 2016-01-21 16:24 CST
Service detection performed. Please report any incorrect results at http.../submit/ .
Nmap done: 1 IP address (1 host up) scanned in 0.63 seconds
See above: nothing reported!
But if I scan a specific port in that same range (the same way), I get the
result I'd expect:
[me#box ~]$ ./nmap --open -A --script ssl-enum-ciphers.nse,ssl-cert.nse -p 10050 w.x.y.z
Starting Nmap 7.01 ( ) at 2016-01-21 16:24 CST
Nmap scan report for box-name (w.x.y.z)
Host is up (0.00010s latency).
PORT STATE SERVICE VERSION
10050/tcp open http Apache httpd
|_http-server-header: Apache
Service detection performed. Please report any incorrect results at ht.../submit/ .
Nmap done: 1 IP address (1 host up) scanned in 11.74 seconds
What's wrong? Why doesn't the first command report that port (and some others) in its results?
The second command line is the same, except that it specifies a single port
that's known to be open (and the output proves it is indeed open).
Makes no sense to me. Any advice?
This is nmap 7.01, if it matters.
Also, I'm scanning the local box itself via its own IP address, on which
the https ports are up and listening (not scanning some other remote machine).
Using brackets around the port list means "Only scan ports if they occur in the services file." The nmap-services file that comes with Nmap does not contain a reference to port 10050, so that port is not scanned. In fact, you can see just which ports are scanned by using Grepable output and the -v flag:
$ ./nmap -p [10050-65535] -oG - -v
# Nmap 7.01SVN scan initiated Fri Jan 22 01:59:37 2016 as: ./nmap -p [10050-65535] -oG - -v
# Ports scanned: TCP(1371;10058,10064,10082-10083,10093,10101,10115,10160,10180,10215,10238,10243,10245-10246,10255,10280,10338,10347,10357,10387,10414,10443,10494,10500,10509,10529,10535,10550-10556,10565-10567,10601-10602,10616-10617,10621,10626,10628-10629,10699,10754,10778,10842,10852,10873,10878,10900,11000-11001,11003,11007,11019,11026,11031-11033,11089,11100,11110-11111,11180,11200,11224,11250,11288,11296,11371,11401,11552,11697,11735,11813,11862-11863,11940,11967,12000-12002,12005-12006,12009,12019,12021,12031,12034,12059,12077,12080,12090,12096-12097,12121,12132,12137,12146,12156,12171,12174,12192,12215,12225,12240,12243,12251,12262,12265,12271,12275,12296,12340,12345-12346,12380,12414,12452,12699,12702,12766,12865,12891-12892,12955,12962,13017,13093,13130,13132,13140,13142,13149,13167,13188,13192-13194,13229,13250,13261,13264-13265,13306,13318,13340,13359,13456,13502,13580,13695,13701,13713-13715,13718,13720-13724,13730,13766,13782-13784,13846,13899,14000-14001,14141,14147,14218,14237-14238,14254,14418,14441-14444,14534,14545,14693,14733,14827,14891,14916,15000-15005,15050,15145,15151,15190-15191,15275,15317,15344,15402,15448,15550,15631,15645-15646,15660,15670,15677,15722,15730,15742,15758,15915,16000-16001,16012,16016,16018,16048,16080,16113,16161,16270,16273,16283,16286,16297,16349,16372,16444,16464,16705,16723-16725,16797,16800,16845,16851,16900-16901,16992-16993,17007,17016-17017,17070,17089,17129,17251,17255,17300,17409,17413,17500,17595,17700-17702,17715,17801-17802,17860,17867,17877,17969,17985,17988,17997,18000,18012,18015,18018,18040,18080,18101,18148,18181-18184,18187,18231,18264,18333,18336-18337,18380,18439,18505,18517,18569,18669,18874,18887,18910,18962,18988,19010,19101,19130,19150,19200-19201,19283,19315,19333,19350,19353,19403,19464,19501,19612,19634,19715,19780,19801,19842,19852,19900,19995-19996,20000-20002,20005,20011,20017,20021,20031-20032,20039,20052,20076,20080,20085,20089,20102,20106,20111,20118,20125,20127,20147,20179-20180,20221-20228,20280,20473,20734,20828,20883,20934,20940,20990,21011,21078,21201,21473,21571,21631,21634,21728,21792,21891,21915,22022,22063,22100,22125,22128,22177,22200,22222-22223,22273,22290,22341,22350,22555,22563,22711,22719,22727,22769,22882,22939,22959,22969,23017,23040,23052,23219,23228,23270,23296,23342,23382,23430,23451,23502,23723,23796,23887,23953,24218,24392,24416,24444,24552,24554,24616,24800,24999-25001,25174,25260,25262,25288,25327,25445,25473,25486,25565,25703,25717,25734-25735,25847,26000-26001,26007,26208,26214,26340,26417,26470,26669,26972,27000-27003,27005,27007,27009-27010,27015-27019,27055,27074-27075,27087,27204,27316,27350-27353,27355-27357,27372,27374,27521,27537,27665,27715,27770,28017,28114,28142,28201,28211,28374,28567,28717,28850-28851,28924,28967,29045,29152,29243,29507,29672,29810,29831,30000-30001,30005,30087,30195,30299,30519,30599,30644,30659,30704-30705,30718,30896,30951,31033,31038,31058,31072,31337,31339,31386,31416,31438,31522,31657,31727-31728,32006,32022,32031,32088,32102,32200,32219,32260-32261,32764-32765,32767-32792,32797-32799,32803,32807,32814-32816,32820,32822,32835,32837,32842,32858,32868-32869,32871,32888,32897-32898,32904-32905,32908,32910-32911,32932,32944,32960-32961,32976,33000,33011,33017,33070,33087,33124,33175,33192,33200,33203,33277,33327,33335,33337,33354,33367,33395,33444,33453,33522-33523,33550,33554,33604-33605,33841,33879,33882,33889,33895,33899,34021,34036,34096,34189,34317,34341,34381,34401,34507,34510,34571-34573,34683,34728,34765,34783,34833,34875,35033,35050,35116,35131,
35217,35272,35349,35392-35393,35401,35500,35506,35513,35553,35593,35731,35879,35900-35901,35906,35929,35986,36046,36104-36105,36256,36275,36368,36436,36508,36530,36552,36659,36677,36694,36710,36748,36823-36824,36914,36950,36962,36983,37121,37151,37174,37185,37218,37393,37522,37607,37614,37647,37674,37777,37789,37839,37855,38029,38037,38185,38188,38194,38205,38224,38270,38292,38313,38331,38358,38446,38481,38546,38561,38570,38761,38764,38780,38805,38936,39067,39117,39136,39265,39293,39376,39380,39433,39482,39489,39630,39659,39732,39763,39774,39795,39869,39883,39895,39917,40000-40003,40005,40011,40193,40306,40393,40400,40457,40489,40513,40614,40628,40712,40732,40754,40811-40812,40834,40911,40951,41064,41123,41142,41250,41281,41318,41342,41345,41348,41398,41442,41511,41523,41551,41632,41773,41794-41795,41808,42001,42035,42127,42158,42251,42276,42322,42449,42452,42510,42559-42560,42575,42590,42632,42675,42679,42685,42735,42906,42990,43000,43002,43018,43027,43103,43139,43143,43188,43212,43231,43242,43425,43654,43690,43734,43823,43868,44004,44101,44119,44176,44200,44334,44380,44410,44431,44442-44443,44479,44501,44505,44541,44616,44628,44704,44709,44711,44965,44981,45038,45050,45100,45136,45164,45220,45226,45413,45438,45463,45602,45624,45697,45777,45864,45960,46034,46069,46115,46171,46182,46200,46310,46372,46418,46436,46593,46813,46992,46996,47012,47029,47119,47197,47267,47348,47372,47448,47544,47557,47567,47581,47595,47624,47634,47700,47777,47806,47850,47858,47860,47966,47969,48009,48067,48080,48083,48127,48153,48167,48356,48434,48619,48631,48648,48682,48783,48813,48925,48966-48967,48973,49002,49048,49132,49152-49161,49163-49173,49175-49176,49179,49186,49189-49191,49195-49197,49201-49204,49211,49213,49216,49228,49232,49235-49236,49241,49275,49302,49352,49372,49398,49400-49401,49452,49498,49500,49519-49522,49597,49603,49678,49751,49762,49765,49803,49927,49999-50003,50006,50016,50019,50040,50050,50101,50189,50198,50202,50205,50224,50246,50258,50277,50300,50356,50389,50500,50513,50529,50545,50576-50577,50585,50636,50692,50733,50787,50800,50809,50815,50831,50833-50836,50849,50854,50887,50903,50945,50997,51011,51020,51037,51067,51103,51118,51139,51191,51233-51235,51240,51300,51343,51351,51366,51413,51423,51460,51484-51485,51488,51493,51515,51582,51658,51771-51772,51800,51809,51906,51909,51961,51965,52000-52003,52025,52046,52071,52173,52225-52226,52230,52237,52262,52391,52477,52506,52573,52660,52665,52673,52675,52710,52735,52822,52847-52851,52853,52869,52893,52948,53085,53178,53189,53211-53212,53240,53313-53314,53319,53361,53370,53460,53469,53491,53535,53633,53639,53656,53690,53742,53782,53827,53852,53910,53958,54045,54075,54101,54127,54235,54263,54276,54320-54321,54323,54328,54514,54551,54605,54658,54688,54722,54741,54873,54907,54987,54991,55000,55020,55055-55056,55183,55187,55227,55312,55350,55382,55400,55426,55479,55527,55555-55556,55568-55569,55576,55579,55600,55635,55652,55684,55721,55758,55773,55781,55901,55907,55910,55948,56016,56055,56259,56293,56507,56535,56591,56668,56681,56723,56725,56737-56738,56810,56822,56827,56973,56975,57020,57103,57123,57294,57325,57335,57347,57350,57352,57387,57398,57479,57576,57665,57678,57681,57702,57730,57733,57797,57891,57896,57923,57928,57988,57999,58001-58002,58072,58080,58107,58109,58164,58252,58305,58310,58374,58430,58446,58456,58468,58498,58562,58570,58610,58622,58630,58632,58634,58699,58721,58838,58908,58970,58991,59087,59107,59110,59122,59149,59160,59191,59200-59202,59239,59340,59499,59504,59509-59510,59525,59565,59684,59778,59810,59829,59841,59987,60000,
60002-60003,60020,60055,60086,60111,60123,60146,60177,60227,60243,60279,60377,60401,60403,60443,60485,60492,60504,60544,60579,60612,60621,60628,60642,60713,60728,60743,60753,60782-60783,60789,60794,60989,61159,61169-61170,61402,61473,61516,61532,61613,61616-61617,61669,61722,61734,61827,61851,61900,61942,62006,62042,62078,62080,62188,62312,62519,62570,62674,62866,63105,63156,63331,63423,63675,63803,64080,64127,64320,64438,64507,64551,64623,64680,64726-64727,64890,65000,65048,65129,65301,65310-65311,65389,65488,65514) UDP(0;) SCTP(0;) PROTOCOLS(0;)
WARNING: No targets were specified, so 0 hosts scanned.
# Nmap done at Fri Jan 22 01:59:37 2016 -- 0 IP addresses (0 hosts up) scanned in 0.10 seconds
That shows 1371 scanned ports out of 55486 in the range you gave. Note that no packets were sent in this command: it's a nice way to see exactly which ports you will scan (like the default 1000, or the top 100 with -F, or some other list with --top-ports).
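If you want every port in your range to actually be probed, drop the brackets; a plain range specification scans all ports in it:

./nmap --open -A --script ssl-enum-ciphers.nse,ssl-cert.nse -p 10050-65535 w.x.y.z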

lmtpd: failed to mmap file /var/lib/imap/deliver.db.NEW (in reply to end of DATA command)

Good day!
After installing and running Kolab, mail was delivered instantly. But after a few days, mail to local destinations started being delivered with a delay. Over time it is delivered, but the delay can be several hours. An example of the path of one message:
root@myhost:~# cat /var/log/mail.log | grep 7AA7935B1FC
Jan 12 11:31:03 myhost postfix/smtpd[19494]: 7AA7935B1FC:
client=localhost[127.0.0.1]
Jan 12 11:31:05 myhost postfix/cleanup[19492]: 7AA7935B1FC:
message-id=<20160112093103.7AA7935B1FC@mail.myhost.com>
Jan 12 11:31:05 myhost postfix/qmgr[7021]: 7AA7935B1FC:
from=<noreply@myhost.com>, size=1279, nrcpt=3 (queue active)
Jan 12 11:31:05 myhost lmtpunix[19631]: Delivered:
<20160112093103.7AA7935B1FC@mail.myhost.com> to mailbox:
myhost.com!user.user1
Jan 12 11:31:06 myhost postfix/lmtp[19617]: 7AA7935B1FC: to=<user1@myhost.com>, relay=mail.myhost.com[/var/lib/imap/socket/lmtp], delay=2.6, delays=2/0.01/0/0.59, dsn=4.3.0, status=deferred (host
mail.myhost.com[/var/lib/imap/socket/lmtp] said: 421 4.3.0 lmtpd:
failed to mmap /var/lib/imap/deliver.db.NEW file (in reply to end of
DATA command))
Jan 12 11:31:06 myhost postfix/lmtp[19617]: 7AA7935B1FC: to=<user2@myhost.com>, relay=mail.myhost.com[/var/lib/imap/socket/lmtp], delay=2.7, delays=2/0.01/0/0.68, dsn=4.4.2, status=deferred (lost connection with mail.myhost.com[/var/lib/imap/socket/lmtp] while sending end of data
-- message may be sent more than once)
Jan 12 11:31:07 myhost postfix/lmtp[19617]: 7AA7935B1FC: to=<user3@myhost.com>, relay=mail.myhost.com[/var/lib/imap/socket/lmtp], delay=2.7, delays=2/0.01/0/0.68, dsn=4.4.2, status=deferred (lost connection with mail.myhost.com[/var/lib/imap/socket/lmtp] while sending end of data
-- message may be sent more than once)
Currently mailq shows a variety of messages in the queue. An example of one of them:
7BBDF35B123 6162 Tue Jan 12 13:19:24 user@rambler.ru (delivery temporarily suspended: lost connection with mail.myhost.com[/var/lib/imap/socket/lmtp] while sending end of data -- message may be sent more than once) user4@myhost.com
-- 11667 Kbytes in 327 Requests.
I think that the main reason is described here:
lmtp: failed to mmap /var/lib/imap/deliver.db.NEW file
But, unfortunately, I have not been able to find a solution.
The problem was solved according to this recommendation: http://lists.kolab.org/pipermail/users-de/2015-May/001998.html
Stop the cyrus-imap and postfix services.
Delete the files deliver.db.NEW and deliver.db in the directory /var/lib/imap/.
Start the services; the file deliver.db is created automatically.
Flush the queue: postsuper -r ALL
Some of the messages were then delivered from the queue a second time.
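Put together as shell commands, the fix looks roughly like this (a sketch; service names and paths may differ on your distribution):

# stop the services
service cyrus-imapd stop
service postfix stop
# remove the delivery databases; deliver.db is recreated on startup
rm /var/lib/imap/deliver.db /var/lib/imap/deliver.db.NEW
# start the services again
service postfix start
service cyrus-imapd start
# re-queue all deferred mail
postsuper -r ALL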
Proposed cause: after the services were installed and started on the new server, users mass-imported messages as *.eml files downloaded from the previous mail system. Perhaps these actions somehow overflowed the index files.
P.S.: Unfortunately, the solution was temporary: the situation described above is repeated periodically :(