I am using Tornado to implement a server with WebSockets. I have a multi-core CPU and I want to use the other cores as well, so I thought of using Python's multiprocessing module. I want to accept the connections on the main process and send the data from another process. My questions are:
Is it possible to share the socket information between processes?
Is it better to use pickling or is there any other method that I can use?
If I use pickling, will the duplicate file descriptors it creates count against the number of file descriptors the OS can handle, or is it the same file descriptor shared between the processes?
Explanation:
There will be a lot of incoming connections and a lot of messages from the client side, so I do not want the main event loop to get stuck sending the data. That is why I am trying to use a different process to send the data to the connections.
Output of strace
I started strace on the process from which I am sending data to the WebSockets. The output of strace looks like this:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
91.01 0.019570 0 441736 sendto
3.60 0.000774 0 29314 read
3.14 0.000675 0 30623 clock_gettime
1.15 0.000248 0 2909 write
0.96 0.000206 0 11855 epoll_wait
0.13 0.000029 0 1534 680 recvfrom
0.00 0.000000 0 17 open
0.00 0.000000 0 34 close
0.00 0.000000 0 17 stat
0.00 0.000000 0 17 fstat
0.00 0.000000 0 34 poll
0.00 0.000000 0 39 mmap
0.00 0.000000 0 26 munmap
0.00 0.000000 0 408 brk
0.00 0.000000 0 134 ioctl
0.00 0.000000 0 34 socket
0.00 0.000000 0 34 17 connect
0.00 0.000000 0 300 setsockopt
0.00 0.000000 0 17 getsockopt
0.00 0.000000 0 200 fcntl
0.00 0.000000 0 17 gettimeofday
0.00 0.000000 0 1185 epoll_ctl
0.00 0.000000 0 178 78 accept4
------ ----------- ----------- --------- --------- ----------------
100.00 0.021502 520662 775 total
Is there any reason that I am getting errors from recvfrom and connect?
No, Tornado does not support this. There are techniques like SCM_RIGHTS to transfer file descriptors to other processes, but this will give you a raw socket in the other process, not a Tornado websocket object (and there is no supported way to construct a websocket object for this socket).
The recommended approach with Tornado is to run one process per CPU and let them share traffic by either putting them behind a load balancer or using SO_REUSEPORT. Sending the data in Tornado is non-blocking; you must make sure that your own code is non-blocking too (using asynchronous interfaces or thread pools).
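For illustration, here is a minimal pre-fork sketch along those lines (the port, URL and handler are made up): it binds the listening sockets once, forks one child per CPU, and lets each child run its own IOLoop.

import tornado.ioloop
import tornado.netutil
import tornado.process
import tornado.web
import tornado.websocket
from tornado.httpserver import HTTPServer


class EchoWebSocket(tornado.websocket.WebSocketHandler):
    def on_message(self, message):
        # write_message is non-blocking: it queues the frame on this
        # process's IOLoop instead of blocking the handler.
        self.write_message(message)


def main():
    app = tornado.web.Application([(r"/ws", EchoWebSocket)])
    sockets = tornado.netutil.bind_sockets(8888)  # bind before forking
    tornado.process.fork_processes(0)             # 0 = one child per CPU core
    server = HTTPServer(app)
    server.add_sockets(sockets)                   # each child serves the shared listeners
    tornado.ioloop.IOLoop.current().start()


if __name__ == "__main__":
    main()

Recent Tornado versions also accept reuse_port=True in bind_sockets, which lets completely independent processes listen on the same port via SO_REUSEPORT instead of forking.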
I will answer the 1st question:
is it possible to share the socket information between processes?
It may depend on the OS, but on Linux it is possible in at least two ways:
1. When the main process accepts a new TCP connection, it can fork a new child process to handle it. After the fork, the child process has the same socket file descriptor as the main process.
2. Use a UNIX domain socket to pass the file descriptor of a socket from the main process to the other process. This requires the SCM_RIGHTS control message and ancillary data (see the sketch below).
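A rough sketch of the second way on Linux, using the socket.send_fds / socket.recv_fds helpers available since Python 3.9 (on older versions the same thing is done with sendmsg() and SCM_RIGHTS ancillary data directly); the port and messages are placeholders:

import socket
from multiprocessing import Process


def worker(unix_sock):
    # The kernel duplicates the descriptor into this process; it refers to
    # the same underlying connection as the parent's copy.
    msg, fds, flags, addr = socket.recv_fds(unix_sock, 1024, 1)
    conn = socket.socket(fileno=fds[0])
    conn.sendall(b"hello from the worker process\n")
    conn.close()


def main():
    parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    Process(target=worker, args=(child_sock,)).start()

    listener = socket.create_server(("0.0.0.0", 9000))
    conn, _ = listener.accept()
    # Hand the accepted connection to the worker, then drop the parent's copy.
    socket.send_fds(parent_sock, [b"fd"], [conn.fileno()])
    conn.close()


if __name__ == "__main__":
    main()

Note that the receiving process gets its own descriptor number (a duplicate) pointing at the same open connection, so each copy counts against that process's file descriptor limit until it is closed; closing the parent's copy does not close the worker's.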
Related
I have something like this
outputs = Parallel(n_jobs=12, verbose=10)(delayed(_process_article)(article, config) for article in data)
Case 1: Running on Ubuntu with 80 cores:
CPU(s): 80
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
There are a total of 90,000 tasks. At around 67k it fails and is terminated.
joblib.externals.loky.process_executor.BrokenProcessPool: A process in the executor was terminated abruptly, the pool is not usable anymore.
When I monitor top at around 67k tasks, I see a sharp fall in memory:
top - 11:40:25 up 2 days, 18:35, 4 users, load average: 7.09, 7.56, 7.13
Tasks: 32 total, 3 running, 29 sleeping, 0 stopped, 0 zombie
%Cpu(s): 7.6 us, 2.6 sy, 0.0 ni, 89.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 33554432 total, 40 free, 33520996 used, 33396 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 40 avail Mem
Case 2: Mac with 8 cores
hw.physicalcpu: 4
hw.logicalcpu: 8
But on the Mac it is much, much slower, and surprisingly it does not get killed at 67k.
Additionally, I reduced the parallelism (in case 1) to 2 and 4, and it still fails :(
Why is this happening? Has anyone faced this issue before and has a fix?
Note: when I run for 50,000 tasks it runs well and does not give any problems.
Thank you!
I got a machine with more memory (128 GB) and that solved the problem!
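For reference, if adding RAM had not been an option, a hedged alternative sketch is to dispatch the work in chunks, since Parallel(...) returns the complete list of results and holds it in memory; the chunk size and function names below are placeholders mirroring the call above:

from joblib import Parallel, delayed


def run_in_chunks(process_article, data, config, chunk_size=5000, n_jobs=12):
    # Yield one chunk of results at a time so the caller can persist each
    # chunk and drop it, instead of holding all 90,000 outputs in one list.
    for start in range(0, len(data), chunk_size):
        chunk = data[start:start + chunk_size]
        yield Parallel(n_jobs=n_jobs, verbose=10)(
            delayed(process_article)(article, config) for article in chunk
        )

This keeps the parent's peak memory roughly proportional to chunk_size rather than to the full 90,000 tasks; it does not change the workers' own memory use, so it only helps if the accumulated results were part of the pressure.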
My company is considering a self-hosted option for a combination of JIRA, Confluence and MySQL running behind an nginx proxy. We are a very small team of 5, and expect extremely mild usage for now. I hardly even expect any concurrent usage at this point.
I am a bit puzzled by the various guidelines posted by Atlassian:
https://confluence.atlassian.com/enterprise/jira-sizing-guide-461504623.html
https://confluence.atlassian.com/adminjiraserver075/jira-applications-installation-requirements-935390824.html
https://confluence.atlassian.com/doc/example-size-and-hardware-specifications-from-customer-survey-76840961.html
https://confluence.atlassian.com/doc/server-hardware-requirements-guide-30736403.html
It seems they don't want to bother providing actual minimum hardware requirements. For example, the same page can say that the "minimum heap size to allocate to Confluence is 1 GB and 1 GB for Synchrony (which is required for collaborative editing)" and also that the "minimum hardware recommendation" is 6 GB. The leap from 1 GB required plus 1 GB optional to a 6 GB recommended minimum is bizarre, to say the least.
I think what I want to know is whether I will be able to fit this setup into a 2GB RAM machine or a 4GB RAM machine (both dual CPU).
OK, I have done a test with the following configuration:
VM with 2 cores capped at ~2.2 GHz and 4 GB RAM
Ubuntu 16.04 server
Docker and docker-compose
Containers:
nginx
jwilder/docker-gen
jrcs/letsencrypt-nginx-proxy-companion
cptactionhank/atlassian-jira-software
cptactionhank/atlassian-confluence
mysql
This 4GB RAM machine is barely capable of running this setup:
$ free -m
total used free shared buff/cache available
Mem: 3951 3553 107 0 291 157
Swap: 974 725 249
CPU usage was going up to 200% only during initialisation when JIRA and Confluence started with empty home dirs. The following top output is after:
creating a space and a page in Confluence
and a project with ~10 issues in JIRA
and linking JIRA and Confluence together
$ top -o %MEM | head -15
top - 16:14:33 up 6:12, 2 users, load average: 0.15, 0.04, 0.01
Tasks: 132 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.6 us, 0.5 sy, 0.0 ni, 95.8 id, 1.0 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 4046364 total, 128808 free, 3638444 used, 279112 buff/cache
KiB Swap: 998396 total, 252956 free, 745440 used. 161144 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6328 bin 20 0 3306232 1.468g 0 S 0.0 38.1 12:03.27 java
6418 bin 20 0 2860000 1.320g 0 S 0.0 34.2 10:56.24 java
7205 bin 20 0 2807088 476592 1724 S 0.0 11.8 1:58.37 java
5752 999 20 0 1815480 99804 4728 S 0.0 2.5 1:11.29 mysqld
1070 root 20 0 621908 28672 8904 S 0.0 0.7 0:30.74 dockerd
1179 root 20 0 623004 7536 2520 S 0.0 0.2 0:16.66 docker-containe
968 root 20 0 291352 6536 1912 S 0.0 0.2 0:00.77 snapd
8310 root 20 0 15388 5064 3056 S 0.0 0.1 0:21.39 docker-gen
Confluence also allocated ~500MB RAM to Synchrony:
$ ps aux --sort -rss | head -4
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
bin 6328 3.3 38.3 3306232 1551120 ? Ssl 10:14 12:12 /usr/lib/jvm/java-1.8-openjdk/bin/java -Djava.util.logging.config.file=/opt/atlassian/confluence...
bin 6418 2.9 34.1 2860000 1382868 ? Ssl 10:14 10:57 /usr/lib/jvm/java-1.8-openjdk/bin/java -Djava.util.logging.config.file=/opt/atlassian/jira/...
bin 7205 0.5 11.7 2807088 476588 ? Sl 10:44 1:59 /usr/lib/jvm/java-1.8-openjdk/jre/bin/java -classpath /opt/atlassian/confluence/temp/... synchrony.core sql
During JIRA and Confluence install stage, MySQL peaked at around 500MB RAM usage, and during normal operation it sits around 100MB.
In my attempts, a 2GB machine was only enough to run either JIRA or Confluence without MySQL.
Conclusion:
It looks like a 4 GB RAM dual-core machine is the absolute minimum required for JIRA + Confluence + MySQL. But keep in mind that such a machine is barely enough for a practically empty project.
I personally was not expecting these applications to be this RAM-hungry while practically empty.
I developed a video streaming server in golang which receives multiple streams via a regular TCP connection and broadcasts them over HTTP.
This is running on my Raspberry Pi cluster, which I operate as a Docker Swarm. The server is horizontally distributed among all the Raspberries. All instances receive all streams; the load distribution comes from distributing among the many clients over HTTP.
A single stream is pretty heavy, about 16Mbit/s of bandwidth used.
It works pretty well as I get 32fps almost without delay.
I decided to check how much CPU and memory my programs are consuming via htop. When I run it as the root user I see my server using around 50% CPU, which I thought was OK since it is receiving 4 simultaneous streams at 32 fps. But then I ran htop as the pi user and noticed another process, ksoftirqd, consuming another 50% CPU (I don't know why I can't see that process when running htop as root).
I did some research and learned this is a special little process that handles delayed system interrupts (or something similar). This kind of makes sense since I'm receiving heavy load via the eth0 interface.
I then looked into /proc/interrupts:
CPU0 CPU1 CPU2 CPU3
16: 0 0 0 0 bcm2836-timer 0 Edge arch_timer
17: 92241367 299499641 352012196 317363321 bcm2836-timer 1 Edge arch_timer
23: 288978 0 0 0 ARMCTRL-level 1 Edge 3f00b880.mailbox
24: 25146874 0 0 0 ARMCTRL-level 2 Edge VCHIQ doorbell
46: 0 0 0 0 ARMCTRL-level 48 Edge bcm2708_fb dma
48: 0 0 0 0 ARMCTRL-level 50 Edge DMA IRQ
50: 0 0 0 0 ARMCTRL-level 52 Edge DMA IRQ
51: 530982 0 0 0 ARMCTRL-level 53 Edge DMA IRQ
54: 13573 0 0 0 ARMCTRL-level 56 Edge DMA IRQ
55: 0 0 0 0 ARMCTRL-level 57 Edge DMA IRQ
56: 0 0 0 0 ARMCTRL-level 58 Edge DMA IRQ
59: 3354 0 0 0 ARMCTRL-level 61 Edge bcm2835-auxirq
62: 3696376760 0 0 0 ARMCTRL-level 64 Edge dwc_otg, dwc_otg_pcd, dwc_otg_hcd:usb1
79: 0 0 0 0 ARMCTRL-level 81 Edge 3f200000.gpio:bank0
80: 0 0 0 0 ARMCTRL-level 82 Edge 3f200000.gpio:bank1
83: 0 0 0 0 ARMCTRL-level 85 Edge 3f804000.i2c
84: 0 0 0 0 ARMCTRL-level 86 Edge 3f204000.spi
86: 460008 0 0 0 ARMCTRL-level 88 Edge mmc0
87: 5270 0 0 0 ARMCTRL-level 89 Edge uart-pl011
92: 4975621 0 0 0 ARMCTRL-level 94 Edge mmc1
220: 3352 0 0 0 bcm2835-auxirq 0 Edge serial
FIQ: usb_fiq
IPI0: 0 0 0 0 CPU wakeup interrupts
IPI1: 0 0 0 0 Timer broadcast interrupts
IPI2: 41355699 278691553 311249136 291769396 Rescheduling interrupts
IPI3: 7758 9176 8710 9334 Function call interrupts
IPI4: 0 0 0 0 CPU stop interrupts
IPI5: 1387927 1201820 2860486 1356720 IRQ work interrupts
IPI6: 0 0 0 0 completion interrupts
Err: 0
As you can see, interrupt 62 is by far the one fired most often, and it is only being handled by CPU0. I suspect that this is the eth0 interrupt (how can I confirm this?), so I thought I could configure it so that the interrupt would be handled by all CPUs.
So I looked into /proc/irq/62/smp_affinity_list and its contents are 0-3, which seems correct, and the contents of the /proc/irq/62/smp_affinity file are f, which also seems correct.
So I don't understand why interrupt 62 is only being handled by CPU0.
I also don't understand why I can only see ksoftirqd in htop if I run it as pi instead of root (the owner of the process is root).
How can I configure the Raspberry Pi to handle that interrupt on all CPUs?
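For reference, the device name and per-CPU counts for a given IRQ can be cross-checked by parsing /proc directly; a minimal sketch, assuming the stock /proc/interrupts layout shown above:

def irq_stats(irq=62):
    # Parse /proc/interrupts: the header row lists the CPUs, and each data
    # row is "IRQ: <count per CPU> <chip / type / device description>".
    with open("/proc/interrupts") as f:
        cpus = f.readline().split()
        for line in f:
            fields = line.split()
            if fields and fields[0].rstrip(":") == str(irq):
                counts = [int(c) for c in fields[1:1 + len(cpus)]]
                device = " ".join(fields[1 + len(cpus):])
                return dict(zip(cpus, counts)), device
    return None, None


counts, device = irq_stats(62)
print(device)  # e.g. "ARMCTRL-level 64 Edge dwc_otg, dwc_otg_pcd, dwc_otg_hcd:usb1"
print(counts)  # per-CPU counts; here everything lands on CPU0

with open("/proc/irq/62/smp_affinity") as f:
    print("smp_affinity:", f.read().strip())  # bitmask of CPUs allowed to service it

The description for IRQ 62 above lists dwc_otg, the USB controller that the Pi's onboard Ethernet hangs off, which is consistent with the suspicion that this is effectively the eth0 interrupt.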
* EDIT *
I had only been checking this on one of the Raspberries belonging to the cluster. Checking the other hosts I saw similar results (just slightly lower CPU usage). Strangely, on another Raspberry there is no ksoftirqd process consuming that much CPU.
Customers are reporting problems almost every day, at about the same hours. The app is running on 2 nodes. It is the Metastorm BPM platform, and it calls our code.
In some dumps I noticed very long running threads (~50 minutes), but not in all of them. Administrators also tell me that just before users report problems, memory usage goes up. Then everything slows down to the point that they can't work, and the admins have to restart the platform on both nodes. My first thought was deadlocks (long running threads), but I didn't manage to confirm that: !syncblk isn't returning anything. Then I looked at memory usage. I noticed a lot of dynamic assemblies, so I thought maybe assemblies are leaking, but it looks like it's not that; I received a dump from a day when everything was working fine and the number of dynamic assemblies is similar. So maybe a memory leak, I thought, but I cannot confirm that either: !dumpheap -stat shows memory usage grows, but I haven't found anything interesting with !gcroot. There is one thing I don't recognize, though: Threadpool Completion Port. There are a lot of them. So maybe something is waiting on something? Here is the data I can give you so far that will fit in this post. Could you suggest anything that could help diagnose this situation?
Users not reporting problems:
Node1 Node2
Size of dump: 638MB 646MB
DynamicAssemblies 259 265
GC Heaps: 37MB 35MB
Loader Heaps: 11MB 11MB
Node1:
Number of Timers: 12
CPU utilization 2%
Worker Thread: Total: 5 Running: 0 Idle: 5 MaxLimit: 2000 MinLimit: 200
Completion Port Thread:Total: 2 Free: 2 MaxFree: 16 CurrentLimit: 4 MaxLimit: 1000 MinLimit: 8
!dumpheap -stat (biggest)
0x793041d0 32,664 2,563,292 System.Object[]
0x79332b9c 23,072 3,485,624 System.Int32[]
0x79330a00 46,823 3,530,664 System.String
0x79333470 22,549 4,049,536 System.Byte[]
Node2:
Number of Timers: 12
CPU utilization 0%
Worker Thread: Total: 7 Running: 0 Idle: 7 MaxLimit: 2000 MinLimit: 200
Completion Port Thread:Total: 3 Free: 1 MaxFree: 16 CurrentLimit: 5 MaxLimit: 1000 MinLimit: 8
!dumpheap -stat
0x793041d0 30,678 2,537,272 System.Object[]
0x79332b9c 21,589 3,298,488 System.Int32[]
0x79333470 21,825 3,680,000 System.Byte[]
0x79330a00 46,938 5,446,576 System.String
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Users start to report problems:
Node1 Node2
Size of dump: 662MB 655MB
DynamicAssemblies 236 235
GC Heaps: 159MB 113MB
Loader Heaps: 10MB 10MB
Node1:
Work Request in Queue: 0
Number of Timers: 14
CPU utilization 20%
Worker Thread: Total: 7 Running: 0 Idle: 7 MaxLimit: 2000 MinLimit: 200
Completion Port Thread:Total: 48 Free: 1 MaxFree: 16 CurrentLimit: 49 MaxLimit: 1000 MinLimit: 8
!dumpheap -stat
0x7932a208 88,974 3,914,856 System.Threading.ReaderWriterLock
0x79333054 71,397 3,998,232 System.Collections.Hashtable
0x24f70350 319,053 5,104,848 Our.Class
0x79332b9c 53,190 6,821,588 System.Int32[]
0x79333470 52,693 6,883,120 System.Byte[]
0x79333150 72,900 11,081,328 System.Collections.Hashtable+bucket[]
0x793041d0 247,011 26,229,980 System.Object[]
0x79330a00 644,807 34,144,396 System.String
Node2:
Work Request in Queue: 1
Number of Timers: 17
CPU utilization 17%
Worker Thread: Total: 6 Running: 0 Idle: 6 MaxLimit: 2000 MinLimit: 200
Completion Port Thread:Total: 48 Free: 2 MaxFree: 16 CurrentLimit: 49 MaxLimit: 1000 MinLimit: 8
!dumpheap -stat
0x7932a208 76,425 3,362,700 System.Threading.ReaderWriterLock
0x79332b9c 42,417 5,695,492 System.Int32[]
0x79333150 41,172 6,451,368 System.Collections.Hashtable+bucket[]
0x79333470 44,052 6,792,004 System.Byte[]
0x793041d0 175,973 18,573,780 System.Object[]
0x79330a00 397,361 21,489,204 System.String
Edit:
I downloaded DebugDiag and let it analyze my dumps. Here is part of the output:
The following threads in process_name name_of_dump.dmp are making a COM call to thread 193 within the same process which in turn is waiting on data to be returned from another server via WinSock.
The call to WinSock originated from 0x0107b03b and is destined for port xxxx at IP address xxx.xxx.xxx.xxx
( 18 76 172 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 210 211 212 213 214 215 216 217 218 224 225 226 227 228 229 231 232 233 236 239 )
14,79% of threads blocked
And the recommendation is:
Several threads making calls to the same STA thread can cause a performance bottleneck due to serialization. Server side COM servers are recommended to be thread aware and follow MTA guidelines when multiple threads are sharing the same object instance.
I checked with WinDbg what thread 193 does. It is calling our code; our code is calling some Metastorm engine code, and it hangs on some remoting call. But !runaway shows it has been hanging for 8 seconds, so not that long. Then I checked what those waiting threads are. All except thread 18 are in:
System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32, UInt32, System.Threading.NativeOverlapped*). I could understand one, but why so many of them? Is this specific to the business process modeling engine we're using, or is it something typical? I guess it is taking threads that could be used by other clients, and that is the slowdown reported by users. Are those threads the Completion Port Threads I asked about before? Can I do anything more to diagnose this, or have I found our code to be the cause?
From the looks of the output, most of the memory is not on the .NET heaps (only ~35 MB out of ~650 MB), so if you are looking at the .NET heaps I think you are looking in the wrong place. The memory is probably either in assemblies or in native memory (if you are using some native component for file transfers or similar). You would want to use DebugDiag to monitor that.
It is hard to say whether you are leaking dynamic assemblies without looking at the pattern of growth, so I would suggest that you look at perfmon and the "# Current Assemblies" counter to see if it keeps growing over time. If it does, you would have to investigate further by looking at what the dynamic assemblies are with !dda.
I've been using prstat and prstat -m a lot lately to investigate performance issues, and I think I've basically understood the difference between sampling and microstate accounting in Solaris 10, so I don't expect both to always show exactly the same numbers.
Today I came across an occasion where the two showed such vastly different outputs that I have problems interpreting them and making sense of the result. The machine is a heavily loaded 8-CPU Solaris 10 box, with several large WebSphere processes and an Oracle database. The system practically came to a halt today for about 15 minutes (load averages of >700). I had difficulty getting any prstat information, but was able to get some output from "prstat 1 1" and "prstat -m 1 1", issued shortly one after another.
The top lines of the outputs:
prstat 1 1:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
8379 was 3208M 2773M cpu5 60 0 5:29:13 19% java/145
7123 was 3159M 2756M run 59 0 5:26:45 7.7% java/109
5855 app1 1132M 26M cpu2 60 0 0:01:01 7.7% java/18
16503 monitor 494M 286M run 59 19 1:01:08 7.1% java/106
7112 oracle 15G 15G run 59 0 0:00:10 4.5% oracle/1
7124 oracle 15G 15G cpu3 60 0 0:00:10 4.5% oracle/1
7087 app1 15G 15G run 58 0 0:00:09 4.0% oracle/1
7155 oracle 96M 6336K cpu1 60 0 0:00:07 3.6% oracle/1
...
Total: 495 processes, 4581 lwps, load averages: 74.79, 35.35, 23.8
prstat -m 1 1 (some seconds later):
PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/NLWP
7087 app1 0.1 56 0.0 0.2 0.4 0.0 13 30 96 2 33 0 oracle/1
7153 oracle 0.1 53 0.0 3.2 1.1 0.0 1.0 42 82 0 14 0 oracle/1
7124 oracle 0.1 47 0.0 0.2 0.2 0.0 0.0 52 77 2 16 0 oracle/1
7112 oracle 0.1 47 0.0 0.4 0.1 0.0 0.0 52 79 1 16 0 oracle/1
7259 oracle 0.1 45 9.4 0.0 0.3 0.0 0.1 45 71 2 32 0 oracle/1
7155 oracle 0.0 42 11 0.0 0.5 0.0 0.1 46 90 1 9 0 oracle/1
7261 oracle 0.0 37 9.5 0.0 0.3 0.0 0.0 53 61 1 17 0 oracle/1
7284 oracle 0.0 32 5.9 0.0 0.2 0.0 0.1 62 53 1 21 0 oracle/1
...
Total: 497 processes, 4576 lwps, load averages: 88.86, 39.93, 25.51
I have a very hard time interpreting the output. prstat seems to tell me that a fair amount of Java processing is going on, together with some Oracle activity, just as I would expect in a normal situation. prstat -m shows a machine totally dominated by Oracle, consuming huge amounts of system time, with the CPUs heavily overloaded (large numbers in LAT).
I'm inclined to believe the output of prstat -m, because that matches much more closely what the system felt like during this time: totally sluggish, almost no user request processing going on in WebSphere, etc. But why does prstat show such wildly different numbers?
Any explanation of this would be welcome!
CU, Joe
There's a known problem with prstat -m on Solaris in the way CPU usage figures are calculated: the value you see has been averaged over all threads (LWPs) in a process, and hence is far too low for heavily multithreaded processes such as your Java app servers, which can have hundreds of threads each (see your NLWP). Probably fewer than a dozen of them are CPU hogs, so the CPU usage of java looks "low"; for example, if 10 of 145 LWPs were each fully busy and the rest idle, averaging would report only about 10/145 ≈ 7% for the whole process. You would need to call it with prstat -Lm to get the per-LWP (thread) breakdown to see that effect. Reference:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6780169
Without further performance monitoring data it's hard to give a non-speculative explanation of what you've seen there. I assume lock contention within the Java applications. One particular workload that can cause this is heavily multithreaded memory-mapped I/O: the threads all pile up on the process address space lock. But it could be a purely Java user-side lock, of course. Running plockstat on one of the Java processes, and/or simple DTrace profiling, would be helpful.