Understanding locust summary result - locust

I have a problem understanding the Locust results, as this is the first time I have load tested my server. I ran Locust from the command line at 00:00 local time with 1000 total users, a hatch rate of 100 per second, and 10000 requests. Below are the results:
Name # reqs # fails Avg Min Max | Median req/s
--------------------------------------------------------------------------------------------------------------------------------------------
GET /api/v0/business/result/22918 452 203(30.99%) 9980 2830 49809 | 6500 1.70
GET /api/v0/business/result/36150 463 229(33.09%) 10636 2898 86221 | 7000 1.50
GET /api/v0/business/result/55327 482 190(28.27%) 10401 3007 48228 | 7000 1.60
GET /api/v0/business/result/69274 502 203(28.79%) 9882 2903 48435 | 6800 1.50
GET /api/v0/business/result/71704 469 191(28.94%) 10714 2748 62271 | 6900 1.70
POST /api/v0/business/query 2268 974(30.04%) 10528 2938 55204 | 7100 7.10
GET /api/v0/suggestions/query/?q=na 2361 1013(30.02%) 10775 2713 63359 | 6800 7.80
--------------------------------------------------------------------------------------------------------------------------------------------
Total 6997 3003(42.92%) 22.90
Percentage of the requests completed within given times
Name # reqs 50% 66% 75% 80% 90% 95% 98% 99% 100%
--------------------------------------------------------------------------------------------------------------------------------------------
GET /api/v0/business/result/22918 452 6500 8300 11000 13000 20000 35000 37000 38000 49809
GET /api/v0/business/result/36150 463 7000 9400 12000 14000 21000 35000 37000 38000 86221
GET /api/v0/business/result/55327 482 7000 9800 12000 13000 21000 34000 38000 39000 48228
GET /api/v0/business/result/69274 502 6800 9000 11000 12000 20000 35000 37000 38000 48435
GET /api/v0/business/result/71704 469 6900 9500 11000 13000 21000 36000 38000 40000 62271
POST /api/v0/business/query 2268 7100 9600 12000 13000 21000 35000 37000 38000 55204
GET /api/v0/suggestions/query/?q=na 2361 6800 9900 12000 14000 22000 35000 37000 39000 63359
--------------------------------------------------------------------------------------------------------------------------------------------
Error report
# occurences Error
--------------------------------------------------------------------------------------------------------------------------------------------
80 GET /api/v0/business/result/71704: "HTTPError('502 Server Error: Bad Gateway',)"
111 GET /api/v0/business/result/71704: "HTTPError('504 Server Error: Gateway Time-out',)"
134 GET /api/v0/business/result/22918: "HTTPError('504 Server Error: Gateway Time-out',)"
69 GET /api/v0/business/result/22918: "HTTPError('502 Server Error: Bad Gateway',)"
92 GET /api/v0/business/result/69274: "HTTPError('502 Server Error: Bad Gateway',)"
594 GET /api/v0/suggestions/query/?q=na: "HTTPError('504 Server Error: Gateway Time-out',)"
111 GET /api/v0/business/result/69274: "HTTPError('504 Server Error: Gateway Time-out',)"
419 GET /api/v0/suggestions/query/?q=na: "HTTPError('502 Server Error: Bad Gateway',)"
69 GET /api/v0/business/result/55327: "HTTPError('502 Server Error: Bad Gateway',)"
121 GET /api/v0/business/result/55327: "HTTPError('504 Server Error: Gateway Time-out',)"
397 POST /api/v0/business/query: "HTTPError('502 Server Error: Bad Gateway',)"
145 GET /api/v0/business/result/36150: "HTTPError('504 Server Error: Gateway Time-out',)"
577 POST /api/v0/business/query: "HTTPError('504 Server Error: Gateway Time-out',)"
84 GET /api/v0/business/result/36150: "HTTPError('502 Server Error: Bad Gateway',)"
--------------------------------------------------------------------------------------------------------------------------------------------
Here is what I am confused about:
What is the meaning of the numbers under # reqs, # fails, Avg, and all the other columns after the name in the first and second tables? Do they show the total number of requests sent, or the n-th request sent?
In the Error report, does the number under # occurences represent the number of requests that caused that error?
Thanks for your answers.

The first table shows the statistics for each row; the timing columns are in milliseconds, and the Total row shows the aggregate for each column. However, in your example there is a problem with the per-row failure percentage: for the first row, 452 requests were sent and 203 of them failed, which is 203/452 ≈ 44.91%, not the 30.99% shown. (The displayed 30.99% happens to equal 203/(452+203), which suggests that # reqs may count only successful requests in this version.) The Total row, on the other hand, is calculated correctly.
The second table is the distribution table, which shows the percentage of requests completed within a given time. In your case it means that 50% of the requests to the first endpoint completed within 6500 ms, 66% completed within 8300 ms, and so on.
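As a sanity check on both tables, here is a quick sketch using the numbers from the first row of your output. The percentile helper is a simple nearest-rank illustration of how to read the distribution table, not Locust's actual implementation, and the response times at the bottom are made up:

```python
import math

# Row 1 of the summary: 452 requests, 203 failures
reqs, fails = 452, 203

print(f"fails/reqs         = {100 * fails / reqs:.2f}%")            # 44.91%
print(f"fails/(reqs+fails) = {100 * fails / (reqs + fails):.2f}%")  # 30.99%
# The second value matches the 30.99% Locust printed, which suggests
# "# reqs" counts only successful requests in this version.

def percentile(samples, p):
    """Nearest-rank percentile: smallest value covering p% of samples."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

# Illustrative response times in ms (not your real data)
times = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
print(percentile(times, 50))  # 500 -> 50% of requests finished within 500 ms
print(percentile(times, 90))  # 900 -> 90% finished within 900 ms
```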


Spring boot admin – high CPU usage on client

I have one application which uses the Spring Boot Admin Server library (v2.2.1) to monitor other applications that have the Spring Boot Admin Client library (v2.2.1) integrated. It works very well; the server tracks the status of the client applications with low performance impact.
Nevertheless, when I open the Insight -> Details page (the default page), CPU usage of the client application grows to 90-100%, which causes my application (running on a 1-CPU system) to respond very slowly. The other pages of Spring Boot Admin Server are fine.
From my observations, the high CPU usage is caused by the frequent refresh (every 1 second) of the information there, especially the charts. In my case Spring Boot Admin sends about 16 requests per second, and processing each takes about 400 ms (the last number on each log line):
2020-03-11 08:31:45.290 127.0.0.1 - GET "/actuator/metrics/cache.gets?tag=name:ui-logbook,result:miss" 200 HTTP/1.0 308 421
2020-03-11 08:31:45.291 127.0.0.1 - GET "/actuator/metrics/cache.gets?tag=name:area-services,result:miss" 200 HTTP/1.0 312 438
2020-03-11 08:31:45.291 127.0.0.1 - GET "/actuator/metrics/cache.gets?tag=name:area-services,result:hit" 200 HTTP/1.0 286 437
2020-03-11 08:31:45.290 127.0.0.1 - GET "/actuator/metrics/cache.size?tag=name:area-services" 200 HTTP/1.0 314 420
2020-03-11 08:31:45.292 127.0.0.1 - GET "/actuator/metrics/cache.gets?tag=name:ui-logbook,result:hit" 200 HTTP/1.0 282 428
2020-03-11 08:31:45.426 127.0.0.1 - GET "/actuator/metrics/cache.size?tag=name:ui-logbook" 200 HTTP/1.0 310 100
2020-03-11 08:31:46.513 127.0.0.1 - GET "/actuator/metrics/jvm.threads.peak" 200 HTTP/1.0 219 436
2020-03-11 08:31:46.520 127.0.0.1 - GET "/actuator/metrics/jvm.threads.live" 200 HTTP/1.0 215 434
2020-03-11 08:31:46.520 127.0.0.1 - GET "/actuator/metrics/process.cpu.usage" 200 HTTP/1.0 207 434
2020-03-11 08:31:46.520 127.0.0.1 - GET "/actuator/metrics/system.cpu.usage" 200 HTTP/1.0 177 433
2020-03-11 08:31:46.521 127.0.0.1 - GET "/actuator/metrics/jvm.gc.pause" 200 HTTP/1.0 401 433
2020-03-11 08:31:46.945 127.0.0.1 - GET "/actuator/metrics/jvm.threads.daemon" 200 HTTP/1.0 179 398
2020-03-11 08:31:46.991 127.0.0.1 - GET "/actuator/metrics/jvm.memory.max?tag=area:heap" 200 HTTP/1.0 282 425
2020-03-11 08:31:46.998 127.0.0.1 - GET "/actuator/metrics/jvm.memory.max?tag=area:nonheap" 200 HTTP/1.0 369 420
2020-03-11 08:31:46.998 127.0.0.1 - GET "/actuator/metrics/jvm.memory.used?tag=area:nonheap" 200 HTTP/1.0 318 422
2020-03-11 08:31:46.999 127.0.0.1 - GET "/actuator/metrics/jvm.memory.used?tag=area:heap" 200 HTTP/1.0 233 420
Is there any way to reduce the refresh rate in order to reduce the CPU load on the client system?
So far I have found only this answer, but the suggested solution did not work.
Thanks for sharing your ideas.

Bootloader lock failure: standard locking commands not working; suspecting a hash failure or corrupted SST key

I do not know why, but when my friend gets inebriated she likes to hook her phone up to a PC and play with it. She has a basic knowledge of ADB and fastboot commands, and I verified with her what was thrown. When she went to re-lock the bootloader, it did not work. Here is what she did: she downloaded the Google minimal SDK tools to get up-to-date ADB and fastboot, then went all the way and got mfastboot from Motorola to ensure correct parsing for flashing. All of these fastboot packages were also tested on Mac and Linux (Ubuntu), and on Windows 8.1 Pro N Update 1 and Windows 7 Professional N SP2 (all x64); all resulted in the same errors. She is super thorough, and I only taught her how to manually erase and flash, no scripts or toolkits. She ran:
fastboot oem lock
and it returned.
(bootloader) FAIL: Please run fastboot oem lock begin first!
(bootloader) sst lock failure!
FAILED (remote failure)
finished. total time: 0.014s
Then she tried again, then again, and then, yep, again. At this point she either read the log and followed it or, I personally think, given how she gets when she starts playing with phones, more likely started to panic, because she needs the bootloader locked for work, and began attempting to flash:
fastboot oem lock begin
and it returned.
M:\SHAMU\FACTORY IMAGE\shamu-lmy47z>fastboot oem lock begin
...
(bootloader) Ready to flash signed images
OKAY [ 0.121s]
finished. total time: 0.123s
FACTORY IMAGE\shamu-lmy47z>fastboot flash boot boot.img
target reported max download size of 536870912 bytes
sending 'boot' (7731 KB)...
OKAY [ 0.252s]
writing 'boot'...
(bootloader) Preflash validation failed
FAILED (remote failure)
finished. total time: 0.271s
Then the bootloader log stated
cmd: oem lock
hab check failed for boot
failed to validate boot image
Upon flashing boot.img, the bootloader log lists "Mismatched partition size (boot)".
Interestingly, sometimes it returns:
fastboot oem lock begin
...
(bootloader) Ready to flash signed images
OKAY [ 0.121s]
finished. total time: 0.123s
fastboot flash boot boot.img
target reported max download size of 536870912 bytes
sending 'boot' (7731 KB)...
OKAY [ 0.252s]
writing 'boot'...
(bootloader) Preflash validation failed
FAILED (remote failure)
finished. total time: 0.271s
I listed the partitions to see if they are zeroed out, which would indicate a bad eMMC, but they are not.
cat /proc/partitions
major minor #blocks name
179 0 61079552 mmcblk0
179 1 114688 mmcblk0p1
179 2 16384 mmcblk0p2
179 3 384 mmcblk0p3
179 4 56 mmcblk0p4
179 5 16 mmcblk0p5
179 6 32 mmcblk0p6
179 7 1024 mmcblk0p7
179 8 256 mmcblk0p8
179 9 512 mmcblk0p9
179 10 500 mmcblk0p10
179 11 4156 mmcblk0p11
179 12 384 mmcblk0p12
179 13 1024 mmcblk0p13
179 14 256 mmcblk0p14
179 15 512 mmcblk0p15
179 16 500 mmcblk0p16
179 17 4 mmcblk0p17
179 18 512 mmcblk0p18
179 19 1024 mmcblk0p19
179 20 1024 mmcblk0p20
179 21 1024 mmcblk0p21
179 22 1024 mmcblk0p22
179 23 16384 mmcblk0p23
179 24 16384 mmcblk0p24
179 25 2048 mmcblk0p25
179 26 32768 mmcblk0p26
179 27 256 mmcblk0p27
179 28 32 mmcblk0p28
179 29 128 mmcblk0p29
179 30 8192 mmcblk0p30
179 31 1024 mmcblk0p31
259 0 2528 mmcblk0p32
259 1 1 mmcblk0p33
259 2 8 mmcblk0p34
259 3 16400 mmcblk0p35
259 4 9088 mmcblk0p36
259 5 16384 mmcblk0p37
259 6 262144 mmcblk0p38
259 7 65536 mmcblk0p39
259 8 1024 mmcblk0p40
259 9 2097152 mmcblk0p41
259 10 58351488 mmcblk0p42
179 32 4096 mmcblk0rpmb
254 0 58351488 dm-0
I've asked for the logs of the whole process so I can see the full warning, error, and failure messages, but she is far away on business. From what I do have, from the literature I have started to crack, and from all my research and learning about the Android boot process, I am starting to believe that maybe there is a missing or corrupted key in the SST table (which I believe Google calls the bigtable), or a hash failure when locking down the bootloader security. Or I could be way off; please let me know. What I do not know is how to investigate or disprove this so I can move on. Would I be able to get confirmation through a stack trace of the missing or corrupted data? Honestly, though, this has become a puzzle that begs to be solved, not an emergency. Thanks.
You should try the "fastboot flashing lock" command instead.

“can't find element ... of the list ... which is only of length” error in Netlogo

I'm working on my thesis about solving a Traveling Salesman Problem with a genetic algorithm. I use NetLogo to solve the problem, but I got this error:
Can't find element 62 of the list
[7400 5100 5000 5000 2100 4300 5200 1200 900 4300 6000 6000 7600 5900 7600
8600 7400 7100 6800 8100 3300 1400 1200 10400 8500 3700 11400 6900 2000 6500
3000 4900 9800 10600 4000 5200 7700 8500 5900 5000 7100 6100 6800 1000
3200 2700 2900 1800 1300 9600 4800 4600 6700 7700 6100 4200 3200 9000 8200
10500 13400],
which is only of length 62.
error while turtle 2 running ITEM
called by procedure CALCULATE-DISTANCE
called by procedure SETUP_1
called by Button 'setup 1'
and I don't know why. Can someone help me with this?
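For what it's worth, the error message points at a classic off-by-one: NetLogo's item primitive is zero-based, so a list of length 62 has valid indices 0 through 61, and item 62 does not exist. The same failure mode sketched in Python (zero-based as well; the list here is just a stand-in for the distance list):

```python
distances = list(range(62))  # stand-in for the 62-element distance list

print(len(distances) - 1)  # 61 -> the largest valid zero-based index

# Asking for element 62 fails, just like NetLogo's "can't find element 62
# of the list ... which is only of length 62"
try:
    distances[62]
except IndexError:
    print("index 62 is out of range for a length-62 list")
```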

Mongod resident memory usage low

I'm trying to debug some performance issues with a MongoDB configuration, and I noticed that the resident memory usage is sitting very low (around 25% of the system memory) despite the fact that there are occasionally large numbers of faults occurring. I'm surprised to see the usage so low given that MongoDB is so memory dependent.
Here's a snapshot of top sorted by memory usage. It can be seen that no other process is using a significant amount of memory:
top - 21:00:47 up 136 days, 2:45, 1 user, load average: 1.35, 1.51, 0.83
Tasks: 62 total, 1 running, 61 sleeping, 0 stopped, 0 zombie
Cpu(s): 13.7%us, 5.2%sy, 0.0%ni, 77.3%id, 0.3%wa, 0.0%hi, 1.0%si, 2.4%st
Mem: 1692600k total, 1676900k used, 15700k free, 12092k buffers
Swap: 917500k total, 54088k used, 863412k free, 1473148k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2461 mongodb 20 0 29.5g 564m 492m S 22.6 34.2 40947:09 mongod
20306 ubuntu 20 0 24864 7412 1712 S 0.0 0.4 0:00.76 bash
20157 root 20 0 73352 3576 2772 S 0.0 0.2 0:00.01 sshd
609 syslog 20 0 248m 3240 520 S 0.0 0.2 38:31.35 rsyslogd
20304 ubuntu 20 0 73352 1668 872 S 0.0 0.1 0:00.00 sshd
1 root 20 0 24312 1448 708 S 0.0 0.1 0:08.71 init
20442 ubuntu 20 0 17308 1232 944 R 0.0 0.1 0:00.54 top
I'd like to at least understand why the memory isn't being better utilized by the server, and ideally to learn how to optimize either the server config or queries to improve performance.
UPDATE:
It's fair that the memory usage looks high, which might lead to the conclusion that it's another process. There are no other processes using any significant memory on the server; the memory appears to be consumed by the page cache, but I'm not clear on why that would be the case:
$free -m
total used free shared buffers cached
Mem: 1652 1602 50 0 14 1415
-/+ buffers/cache: 172 1480
Swap: 895 53 842
UPDATE:
You can see that the database is still page faulting:
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn set repl time
0 402 377 0 1167 446 0 24.2g 51.4g 3g 0 <redacted>:9.7% 0 0|0 1|0 217k 420k 457 mover PRI 03:58:43
10 295 323 0 961 592 0 24.2g 51.4g 3.01g 0 <redacted>:10.9% 0 14|0 1|1 228k 500k 485 mover PRI 03:58:44
10 240 220 0 698 342 0 24.2g 51.4g 3.02g 5 <redacted>:10.4% 0 0|0 0|0 164k 429k 478 mover PRI 03:58:45
25 449 359 0 981 479 0 24.2g 51.4g 3.02g 32 <redacted>:20.2% 0 0|0 0|0 237k 503k 479 mover PRI 03:58:46
18 469 337 0 958 466 0 24.2g 51.4g 3g 29 <redacted>:20.1% 0 0|0 0|0 223k 500k 490 mover PRI 03:58:47
9 306 238 1 759 325 0 24.2g 51.4g 2.99g 18 <redacted>:10.8% 0 6|0 1|0 154k 321k 495 mover PRI 03:58:48
6 301 236 1 765 325 0 24.2g 51.4g 2.99g 20 <redacted>:11.0% 0 0|0 0|0 156k 344k 501 mover PRI 03:58:49
11 397 318 0 995 395 0 24.2g 51.4g 2.98g 21 <redacted>:13.4% 0 0|0 0|0 198k 424k 507 mover PRI 03:58:50
10 544 428 0 1237 532 0 24.2g 51.4g 2.99g 13 <redacted>:15.4% 0 0|0 0|0 262k 571k 513 mover PRI 03:58:51
5 291 264 0 878 335 0 24.2g 51.4g 2.98g 11 <redacted>:9.8% 0 0|0 0|0 163k 330k 513 mover PRI 03:58:52
It appears this was being caused by a large amount of inactive memory on the server that wasn't being cleared for Mongo's use.
By looking at the result from:
cat /proc/meminfo
I could see a large amount of inactive memory. Running this command as a sudo user:
free && sync && echo 3 > /proc/sys/vm/drop_caches && echo "" && free
freed up the inactive memory, and over the next 24 hours I watched the resident memory of my Mongo instance grow to consume the rest of the memory available on the server.
Credit to the following blog post for its instructions:
http://tinylan.com/index.php/article/how-to-clear-inactive-memory-in-linux
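The /proc/meminfo check described above can be scripted. A small sketch that pulls the Inactive figure out of meminfo-style text (the sample text below is made up for illustration; on a real system you would pass the contents of /proc/meminfo itself):

```python
SAMPLE = """\
MemTotal:        1692600 kB
MemFree:           15700 kB
Buffers:           12092 kB
Cached:          1473148 kB
Active:           150000 kB
Inactive:        1400000 kB
"""

def meminfo_kb(text, field):
    """Return the value of a /proc/meminfo-style field, in kB."""
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if key.strip() == field:
            return int(rest.split()[0])
    raise KeyError(field)

print(meminfo_kb(SAMPLE, "Inactive"))  # 1400000
```

On a real Linux host you would call meminfo_kb(open("/proc/meminfo").read(), "Inactive") and watch how the value changes after dropping caches.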
MongoDB only uses as much memory as it needs, so if all of the data and indexes in MongoDB fit inside what it's currently using, you won't be able to push that any higher.
If the data set is larger than memory, there are a couple of considerations:
Check MongoDB itself to see how much data it thinks it's using by running mongostat and looking at the resident-memory column.
Was MongoDB re/started recently? If it's cold, the data won't be in memory until it gets paged in (leading to more page faults initially, which gradually settle). Check out the touch command for more information on "warming MongoDB up".
Check your read-ahead settings. If your system read-ahead is too high, MongoDB can't use the memory on the system efficiently. For MongoDB, a good number to start with is a setting of 32 (that's 16 KB of read-ahead, assuming you have 512-byte blocks).
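The arithmetic behind that suggested setting, as a quick sanity check (the 512-byte block size is the assumption stated above; on Linux, read-ahead values are conventionally counted in 512-byte sectors):

```python
SECTOR_BYTES = 512        # block size assumed in the answer above
READAHEAD_SECTORS = 32    # suggested starting value for MongoDB

readahead_kb = READAHEAD_SECTORS * SECTOR_BYTES // 1024
print(f"{readahead_kb} KB of read-ahead")  # 16 KB of read-ahead
```

On Linux the current setting can typically be inspected with blockdev --getra <device> and changed with blockdev --setra 32 <device>.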
I had the same issue: Windows Server 2008 R2, 16 GB RAM, Mongo 2.4.3. Mongo used only 2 GB of RAM and generated a lot of page faults. Queries were very slow, the disk was idle, and memory was free. I found no other solution than upgrading to 2.6.5. That helped.

Help me analyze dump file

Customers are reporting problems almost every day, at about the same hours. The app is running on 2 nodes. It is the Metastorm BPM platform, and it calls our code.
In some dumps I noticed very long-running threads (~50 minutes), but not in all of them. Administrators also tell me that just before users report problems, memory usage goes up; then everything slows down to the point that users can't work, and the admins have to restart the platform on both nodes. My first thought was deadlocks (the long-running threads), but I didn't manage to confirm that: !syncblk isn't returning anything. Then I looked at memory usage. I noticed a lot of dynamic assemblies, so I thought maybe assemblies were leaking, but it looks like it's not that: I received a dump from a day when everything was working fine, and the number of dynamic assemblies is similar. So maybe a memory leak, I thought, but I cannot confirm that either: !dumpheap -stat shows memory usage growing, but I haven't found anything interesting with !gcroot. There is one thing I don't recognize, though: Threadpool Completion Port. There are a lot of them. So maybe something is waiting on something? Here is the data I can give you so far that will fit in this post. Could you suggest anything that could help diagnose this situation?
Users not reporting problems:
Node1 Node2
Size of dump: 638MB 646MB
DynamicAssemblies 259 265
GC Heaps: 37MB 35MB
Loader Heaps: 11MB 11MB
Node1:
Number of Timers: 12
CPU utilization 2%
Worker Thread: Total: 5 Running: 0 Idle: 5 MaxLimit: 2000 MinLimit: 200
Completion Port Thread:Total: 2 Free: 2 MaxFree: 16 CurrentLimit: 4 MaxLimit: 1000 MinLimit: 8
!dumpheap -stat (biggest)
0x793041d0 32,664 2,563,292 System.Object[]
0x79332b9c 23,072 3,485,624 System.Int32[]
0x79330a00 46,823 3,530,664 System.String
0x79333470 22,549 4,049,536 System.Byte[]
Node2:
Number of Timers: 12
CPU utilization 0%
Worker Thread: Total: 7 Running: 0 Idle: 7 MaxLimit: 2000 MinLimit: 200
Completion Port Thread:Total: 3 Free: 1 MaxFree: 16 CurrentLimit: 5 MaxLimit: 1000 MinLimit: 8
!dumpheap -stat
0x793041d0 30,678 2,537,272 System.Object[]
0x79332b9c 21,589 3,298,488 System.Int32[]
0x79333470 21,825 3,680,000 System.Byte[]
0x79330a00 46,938 5,446,576 System.String
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Users start to report problems:
Node1 Node2
Size of dump: 662MB 655MB
DynamicAssemblies 236 235
GC Heaps: 159MB 113MB
Loader Heaps: 10MB 10MB
Node1:
Work Request in Queue: 0
Number of Timers: 14
CPU utilization 20%
Worker Thread: Total: 7 Running: 0 Idle: 7 MaxLimit: 2000 MinLimit: 200
Completion Port Thread:Total: 48 Free: 1 MaxFree: 16 CurrentLimit: 49 MaxLimit: 1000 MinLimit: 8
!dumpheap -stat
0x7932a208 88,974 3,914,856 System.Threading.ReaderWriterLock
0x79333054 71,397 3,998,232 System.Collections.Hashtable
0x24f70350 319,053 5,104,848 Our.Class
0x79332b9c 53,190 6,821,588 System.Int32[]
0x79333470 52,693 6,883,120 System.Byte[]
0x79333150 72,900 11,081,328 System.Collections.Hashtable+bucket[]
0x793041d0 247,011 26,229,980 System.Object[]
0x79330a00 644,807 34,144,396 System.String
Node2:
Work Request in Queue: 1
Number of Timers: 17
CPU utilization 17%
Worker Thread: Total: 6 Running: 0 Idle: 6 MaxLimit: 2000 MinLimit: 200
Completion Port Thread:Total: 48 Free: 2 MaxFree: 16 CurrentLimit: 49 MaxLimit: 1000 MinLimit: 8
!dumpheap -stat
0x7932a208 76,425 3,362,700 System.Threading.ReaderWriterLock
0x79332b9c 42,417 5,695,492 System.Int32[]
0x79333150 41,172 6,451,368 System.Collections.Hashtable+bucket[]
0x79333470 44,052 6,792,004 System.Byte[]
0x793041d0 175,973 18,573,780 System.Object[]
0x79330a00 397,361 21,489,204 System.String
Edit:
I downloaded debugdiag and let it analyze my dumps. Here is part of output:
The following threads in process_name name_of_dump.dmp are making a COM call to thread 193 within the same process which in turn is waiting on data to be returned from another server via WinSock.
The call to WinSock originated from 0x0107b03b and is destined for port xxxx at IP address xxx.xxx.xxx.xxx
( 18 76 172 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 210 211 212 213 214 215 216 217 218 224 225 226 227 228 229 231 232 233 236 239 )
14,79% of threads blocked
And the recommendation is:
Several threads making calls to the same STA thread can cause a performance bottleneck due to serialization. Server side COM servers are recommended to be thread aware and follow MTA guidelines when multiple threads are sharing the same object instance.
I checked with windbg what thread 193 does. It is calling our code; our code calls some Metastorm engine code and hangs on a remoting call. But !runaway shows it has been hanging for only 8 seconds, so not that long. So I checked what those waiting threads are. All except thread 18 are in:
System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32, UInt32, System.Threading.NativeOverlapped*)
I could understand one, but why so many of them? Is it specific to the business process modelling engine we're using, or is it something typical? I guess it's taking threads that could be used by other clients, and that's why users report the slowdown. Are those the Completion Port Threads I asked about before? Can I do anything more to diagnose this, or have I found our code to be the cause?
From the looks of the output, most of the memory is not on the .NET heaps (only ~35 MB out of ~650 MB), so if you are looking at the .NET heaps, I think you are looking in the wrong place. The memory is probably either in assemblies or in native memory (if you are using some native component for file transfers or similar). You would want to use Debug Diag to monitor that.
It is hard to say whether you are leaking dynamic assemblies without looking at the pattern of growth, so I would suggest that you look in perfmon at the # Current Assemblies counter to see if it keeps growing over time. If it does, you will have to investigate further by looking at what the dynamic assemblies are with !dda.