I have a Go program running on CentOS that usually has around 5k TCP clients connected. Every once in a while this number rises to around 15k for about an hour, and everything is still fine.
The program has a slow shutdown mode in which it stops accepting new clients and slowly kills all currently connected clients over the course of 20 minutes. During these slow shutdowns, if the machine has 15k clients, I sometimes get:
[Wed Oct 31 21:28:23 2018] net_ratelimit: 482 callbacks suppressed
[Wed Oct 31 21:28:23 2018] TCP: too many orphaned sockets
[Wed Oct 31 21:28:23 2018] TCP: too many orphaned sockets
[Wed Oct 31 21:28:23 2018] TCP: too many orphaned sockets
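For reference, the shutdown logic is roughly equivalent to this sketch (simplified; the real program tracks its connections and handles errors differently):

// slow_shutdown.go: simplified sketch of the slow shutdown mode, not the actual code.
package server

import (
	"net"
	"time"
)

// SlowShutdown stops accepting new clients, then closes the currently
// connected clients at an even pace over the shutdown window.
func SlowShutdown(ln net.Listener, conns []net.Conn, window time.Duration) {
	ln.Close() // stop accepting new clients
	if len(conns) == 0 {
		return
	}
	pace := window / time.Duration(len(conns)) // e.g. 20 min / 15000 conns ≈ 80 ms per close
	for _, c := range conns {
		c.Close() // a socket closed here but still finishing TCP teardown is what the kernel counts as an orphan
		time.Sleep(pace)
	}
}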
I have tried adding:
echo "net.ipv4.tcp_max_syn_backlog=5000" >> /etc/sysctl.conf
echo "net.ipv4.tcp_fin_timeout=10" >> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_recycle=1" >> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_reuse=1" >> /etc/sysctl.conf
sysctl -f /etc/sysctl.conf
These values are set; I can see them with their correct new values. A typical sockstat looks like this:
cat /proc/net/sockstat
sockets: used 31682
TCP: inuse 17286 orphan 5 tw 3874 alloc 31453 mem 15731
UDP: inuse 8 mem 3
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
Any ideas how to stop the "too many orphaned sockets" errors and the crash? Should I increase the 20 min slow shutdown period to 40 mins? Increase tcp_mem? Thanks!
Related
Let's say my job was running for some time, went into a suspended state because the machine was overloaded, and later became running again and completed.
So the states this job went through were RUNNING -> SUSPEND -> RUNNING.
How can I get all the states a given job went through?
bjobs -l: if the job hasn't been cleaned from the system yet.
bhist -l: otherwise. You might need -n, depending on how old the job is.
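For example, to make bhist search all of the available (possibly rotated) event log files for an older job, something like:
bhist -n 0 -l 1168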
Here's an example of bhist -l output when a job was suspended and later resumed because the system load temporarily exceeded the configured threshold.
$ bhist -l 1168
Job <1168>, User <mclosson>, Project <default>, Command <sleep 10000>
Fri Jan 20 15:08:40: Submitted from host <hostA>, to
Queue <normal>, CWD <$HOME>, Specified Hosts <hostA>;
Fri Jan 20 15:08:41: Dispatched 1 Task(s) on Host(s) <hostA>, Allocated 1 Slot(
s) on Host(s) <hostA>, Effective RES_REQ <select[type == any] or
der[r15s:pg] >;
Fri Jan 20 15:08:41: Starting (Pid 30234);
Fri Jan 20 15:08:41: Running with execution home </home/mclosson>, Execution CW
D </home/mclosson>, Execution Pid <30234>;
Fri Jan 20 16:19:22: Suspended: Host load exceeded threshold: 1-minute CPU ru
n queue length (r1m)
Fri Jan 20 16:21:43: Running;
Summary of time in seconds spent in various states by Fri Jan 20 16:22:09
PEND PSUSP RUN USUSP SSUSP UNKWN TOTAL
1 0 4267 0 141 0 4409
At 16:19:22 the job was suspended because r1m exceeded the threshold. Later, at 16:21:43, the job resumed.
CentOS 6.7
PostgreSQL 9.5.3
I have DB servers set up with master-standby replication.
Suddenly, the standby server's PostgreSQL process stopped with these logs:
2016-07-14 18:14:19.544 JST [][5783e03b.3cdb][0][15579]WARNING: page 1671400 of relation base/16400/559613 is uninitialized
2016-07-14 18:14:19.544 JST [][5783e03b.3cdb][0][15579]CONTEXT: xlog redo Heap2/VISIBLE: cutoff xid 1902107520
2016-07-14 18:14:19.544 JST [][5783e03b.3cdb][0][15579]PANIC: WAL contains references to invalid pages
2016-07-14 18:14:19.544 JST [][5783e03b.3cdb][0][15579]CONTEXT: xlog redo Heap2/VISIBLE: cutoff xid 1902107520
2016-07-14 18:14:21.026 JST [][5783e038.3cd9][0][15577]LOG: startup process (PID 15579) was terminated by signal 6: Aborted
2016-07-14 18:14:21.026 JST [][5783e038.3cd9][0][15577]LOG: terminating any other active server processes
There was nothing special in the master server's PostgreSQL logs.
But the master server's /var/log/messages contained the entries below.
Jul 14 05:38:44 host kernel: sbridge: HANDLING MCE MEMORY ERROR
Jul 14 05:38:44 host kernel: CPU 8: Machine Check Exception: 0 Bank 9: 8c000040000800c0
Jul 14 05:38:44 host kernel: TSC 0 ADDR 1f7dad7000 MISC 90004000400008c PROCESSOR 0:306e4 TIME 1468442324 SOCKET 1 APIC 20
Jul 14 05:38:44 host kernel: EDAC MC1: CE row 1, channel 0, label "CPU_SrcID#1_Channel#0_DIMM#1": 1 Unknown error(s): memory scrubbing on FATAL area : cpu=8 Err=0008:00c0 (ch=0), addr = 0x1f7dad7000 => socket=1, Channel=0(mask=1), rank=4
Jul 14 05:38:44 host kernel:
Jul 14 18:30:40 host kernel: sbridge: HANDLING MCE MEMORY ERROR
Jul 14 18:30:40 host kernel: CPU 8: Machine Check Exception: 0 Bank 9: 8c000040000800c0
Jul 14 18:30:40 host kernel: TSC 0 ADDR 1f7dad7000 MISC 90004000400008c PROCESSOR 0:306e4 TIME 1468488640 SOCKET 1 APIC 20
Jul 14 18:30:41 host kernel: EDAC MC1: CE row 1, channel 0, label "CPU_SrcID#1_Channel#0_DIMM#1": 1 Unknown error(s): memory scrubbing on FATAL area : cpu=8 Err=0008:00c0 (ch=0), addr = 0x1f7dad7000 => socket=1, Channel=0(mask=1), rank=4
Jul 14 18:30:41 host kernel:
The memory errors started about a week ago, so I suspect they are the cause of the PostgreSQL error.
My questions:
1) Can a kernel-level memory error cause PostgreSQL's "WAL contains references to invalid pages" error?
2) Why are there no relevant log entries in the master server's PostgreSQL log?
Thanks.
Faulty memory can cause all kinds of data corruption, so that seems like a good enough explanation to me.
Perhaps there are no log entries at the master PostgreSQL server because all that was corrupted was the WAL stream.
You can run
oid2name
to find out which database has OID 16400 and then
oid2name -d <database with OID 16400> -f 559613
to find out which table belongs to file 559613.
Is that table larger than 12 GB? (With the default 8 kB block size, page 1671400 would start at roughly 1671400 x 8 kB ≈ 12.7 GB into the relation.) If not, that would mean that page 1671400 is indeed an invalid value.
There may also be replication bugs fixed in releases after 9.5.3 that could cause replication problems even without a hardware fault; read the release notes.
I would perform a new pg_basebackup and reinitialize the slave system.
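For example, something along these lines (host, user and target directory are placeholders; adjust them to your setup):
pg_basebackup -h <master_host> -U <replication_user> -D /path/to/new/data/directory -X stream -P -R
Here -X stream streams the required WAL while the backup runs, -P shows progress, and -R writes a minimal recovery.conf for the new standby.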
But what I'd really be worried about is possible data corruption on the master server. Block checksums are cool (turned on if pg_controldata <data directory> | grep checksum gives you 1), but possibly won't detect the effects of memory corruption.
Try something like
pg_dumpall -f /dev/null
on the master and see if there are errors.
Keep your old backups in case you need to repair something!
Good day!
After installing and starting Kolab, mail was delivered instantly. But after a few days, messages to local recipients started being delivered with a delay. They do get delivered eventually, but the delay can be several hours. Here is an example of one message's path:
root@myhost:~# cat /var/log/mail.log | grep 7AA7935B1FC
Jan 12 11:31:03 myhost postfix/smtpd[19494]: 7AA7935B1FC:
client=localhost[127.0.0.1]
Jan 12 11:31:05 myhost postfix/cleanup[19492]: 7AA7935B1FC:
message-id=<20160112093103.7AA7935B1FC@mail.myhost.com>
Jan 12 11:31:05 myhost postfix/qmgr[7021]: 7AA7935B1FC:
from=<noreply@myhost.com>, size=1279, nrcpt=3 (queue active)
Jan 12 11:31:05 myhost lmtpunix[19631]: Delivered:
<20160112093103.7AA7935B1FC@mail.myhost.com> to mailbox:
myhost.com!user.user1
Jan 12 11:31:06 myhost postfix/lmtp[19617]: 7AA7935B1FC: to=<user1@myhost.com>, relay=mail.myhost.com[/var/lib/imap/socket/lmtp], delay=2.6, delays=2/0.01/0/0.59, dsn=4.3.0, status=deferred (host
mail.myhost.com[/var/lib/imap/socket/lmtp] said: 421 4.3.0 lmtpd:
failed to mmap /var/lib/imap/deliver.db.NEW file (in reply to end of
DATA command))
Jan 12 11:31:06 myhost postfix/lmtp[19617]: 7AA7935B1FC: to=<user2@myhost.com>, relay=mail.myhost.com[/var/lib/imap/socket/lmtp], delay=2.7, delays=2/0.01/0/0.68, dsn=4.4.2, status=deferred (lost connection with mail.myhost.com[/var/lib/imap/socket/lmtp] while sending end of data
-- message may be sent more than once)
Jan 12 11:31:07 myhost postfix/lmtp[19617]: 7AA7935B1FC: to=<user3@myhost.com>, relay=mail.myhost.com[/var/lib/imap/socket/lmtp], delay=2.7, delays=2/0.01/0/0.68, dsn=4.4.2, status=deferred (lost connection with mail.myhost.com[/var/lib/imap/socket/lmtp] while sending end of data
-- message may be sent more than once)
Currently mailq shows a variety of messages in the queue. Here is an example of one of them:
7BBDF35B123 6162 Tue Jan 12 13:19:24 user@rambler.ru (delivery temporarily suspended: lost connection with mail.myhost.com[/var/lib/imap/socket/lmtp] while sending end of data -- message may be sent more than once) user4@myhost.com
-- 11667 Kbytes in 327 Requests.
I think that the main reason is described here:
lmtp: failed to mmap /var/lib/imap/deliver.db.NEW file
But, unfortunately, I have not been able to find a solution.
The problem was solved by following this recommendation: http://lists.kolab.org/pipermail/users-de/2015-May/001998.html (see the command sketch after the steps):
Stop the cyrus-imap and postfix services
Delete the files deliver.db.NEW and deliver.db in the directory /var/lib/imap/
Start the services again; the file deliver.db is recreated automatically
Requeue the deferred mail: postsuper -r ALL
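In terms of commands, that is roughly the following (exact service names vary by distribution; on some systems the IMAP service is called cyrus-imapd rather than cyrus-imap):
service cyrus-imap stop
service postfix stop
rm /var/lib/imap/deliver.db /var/lib/imap/deliver.db.NEW
service cyrus-imap start
service postfix start
postsuper -r ALL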
Some of the messages were then delivered from the queue again.
Proposed cause: after the services were installed and started on the new server, users mass-imported messages as *.eml files downloaded from the previous mail system. Perhaps these actions somehow overflowed the index files.
P.S.: Unfortunately, the solution was only temporary: the situation described above keeps recurring periodically :(
I have installed K8s on OpenStack following this guide.
The installation went fine and I was able to run pods, but after some time my applications stop working. I can still create pods, but requests won't reach the services, either from outside the cluster or from within the pods. Basically, something in the networking gets messed up. iptables -L -vnt nat still shows the proper configuration, but things don't work.
To get it working again I have to rebuild the cluster; removing all services and replication controllers doesn't help.
I tried to look into the logs. Below is the journal for kube-proxy:
Dec 20 02:12:18 minion01.novalocal systemd[1]: Started Kubernetes Proxy.
Dec 20 02:15:52 minion01.novalocal kube-proxy[1030]: I1220 02:15:52.269784 1030 proxier.go:487] Opened iptables from-containers public port for service "default/opensips:sipt" on TCP port 5060
Dec 20 02:15:52 minion01.novalocal kube-proxy[1030]: I1220 02:15:52.278952 1030 proxier.go:498] Opened iptables from-host public port for service "default/opensips:sipt" on TCP port 5060
Dec 20 03:05:11 minion01.novalocal kube-proxy[1030]: W1220 03:05:11.806927 1030 api.go:224] Got error status on WatchEndpoints channel: &{TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:401: The event in requested index is outdated and cleared (the requested history has been cleared [1433/544]) [2432] Reason: Details:<nil> Code:0}
Dec 20 03:06:08 minion01.novalocal kube-proxy[1030]: W1220 03:06:08.177225 1030 api.go:153] Got error status on WatchServices channel: &{TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:401: The event in requested index is outdated and cleared (the requested history has been cleared [1476/207]) [2475] Reason: Details:<nil> Code:0}
..
..
..
Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.448570 1030 proxier.go:161] Failed to ensure iptables: error creating chain "KUBE-PORTALS-CONTAINER": fork/exec /usr/sbin/iptables: too many open files:
Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: W1220 16:01:23.448749 1030 iptables.go:203] Error checking iptables version, assuming version at least 1.4.11: %vfork/exec /usr/sbin/iptables: too many open files
Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.448868 1030 proxier.go:409] Failed to install iptables KUBE-PORTALS-CONTAINER rule for service "default/kubernetes:"
Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.448906 1030 proxier.go:176] Failed to ensure portal for "default/kubernetes:": error checking rule: fork/exec /usr/sbin/iptables: too many open files:
Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: W1220 16:01:23.449006 1030 iptables.go:203] Error checking iptables version, assuming version at least 1.4.11: %vfork/exec /usr/sbin/iptables: too many open files
Dec 20 16:01:23 minion01.novalocal kube-proxy[1030]: E1220 16:01:23.449133 1030 proxier.go:409] Failed to install iptables KUBE-PORTALS-CONTAINER rule for service "default/repo-client:"
I found a few posts about "failed to install iptables", but they don't seem relevant, since everything works initially and only gets messed up after a few hours.
What version of Kubernetes is this? A long time ago (~1.0.4) we had a bug in the kube-proxy where it leaked sockets/file-descriptors.
If you aren't running a 1.1.3 binary, consider upgrading.
Also, you should be able to use lsof to figure out who has all of the files open.
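For example, to count the descriptors the kube-proxy process currently holds open (assuming a single kube-proxy process on the node):
lsof -p $(pidof kube-proxy) | wc -l
Dropping the | wc -l shows which files and sockets they actually are.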
I am currently testing the G-WAN web server, and I have a question about the default settings for the G-WAN workers and CPU core detection.
Running G-WAN on a Xen server (with a 4-core Xeon CPU), the gwan.log file reports that only a single core was detected:
'./gwan -w 4' used to override 1 x 1-Core CPU(s)
[Wed May 22 06:54:29 2013 GMT] using 1 workers 0[01]0
[Wed May 22 06:54:29 2013 GMT] among 2 threads 0[11]1
[Wed May 22 06:54:29 2013 GMT] (can't use more threads than CPU Cores)
[Wed May 22 06:54:29 2013 GMT] CPU: 1x Intel(R) Xeon(R) CPU E5506 @ 2.13GHz
[Wed May 22 06:54:29 2013 GMT] 0 id: 0 0
[Wed May 22 06:54:29 2013 GMT] 1 id: 1 1
[Wed May 22 06:54:29 2013 GMT] 2 id: 2 2
[Wed May 22 06:54:29 2013 GMT] 3 id: 3 3
[Wed May 22 06:54:29 2013 GMT] CPU(s):1, Core(s)/CPU:1, Thread(s)/Core:2
Any idea?
Thanks!!
the gwan.log file reports that I only got a single core:
Xen, like many other hypervisors, breaks the CPUID instruction and the /proc/cpuinfo Linux kernel structures (both of which are used by G-WAN to detect CPU cores).
As you have seen, this is a real problem for multithreaded applications designed to scale on multicore.
'./gwan -w 4' used to override 1 x 1-Core CPU(s)
To avoid stupid manual overrides wasting memory and impacting performance, G-WAN checks that the user-supplied thread count is not greater than the actual CPU Core count.
This is why you have the warning: "(can't use more threads than CPU Cores)".
To bypass this protection and warning, you can use the -g ("God") mode:
./gwan -g -w 4
This command line switch is documented in the G-WAN executable help:
./gwan -h
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Usage: gwan [-b -d -g -k -r -t -v -w] [argument]
(grouped options '-bd' are ignored, use '-b -d')
-b: use the TCP_DEFER_ACCEPT TCP option
(this is disabling the DoS shield!)
-d: daemon mode (uses an 'angel' process)
-d:group:user dumps 'root' privileges
-g: do not limit worker threads to Cores
(using more threads than physical Cores
will lower speed and waste resources)
-k: (gracefully) kill local gwan processes
using the pid files of the gwan folder
-r: run the specified C script (and exit)
for general-purpose code, not servlets
-t: store client requests in 'trace' file
(may be useful to debug/trace attacks)
-v: return version number and build date
-w: define the number of worker threads.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Available network interfaces (2):
127.0.0.1 192.168.0.11