Background Information
I recently started getting these errors on a production ColdFusion 10 server:
Service Temporary Unavailable!
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
Jakarta/ISAPI/isapi_redirector/1.2.32 ()
After spending a few hours with Google, I came across this thread on the Adobe forums:
https://forums.adobe.com/thread/1016323?start=0&tstart=0
There is a lot of information in this thread, but I focused on two areas:
1. Get the current usage of threads/sessions/memory metrics.
2. Use the metrics information to tune the ColdFusion IIS connector configuration.
My end goal was to work through this blog post:
http://blogs.coldfusion.com/post.cfm/tuning-coldfusion-10-iis-connector-configuration
The blog post was referenced in this reported bug:
https://bugbase.adobe.com/index.cfm?event=bug&id=3318104
Problem
I'm currently stuck on #1: getting the current usage of threads/sessions/memory metrics.
I checked CF Admin > Debugging & Logging > Debug Output Settings > Enable Metrics Logging.
ColdFusion 10 metrics.log:
"Information","scheduler-1","07/20/14","15:12:24",,"Max threads: null Current thread count: null Current thread busy: null Max processing time: null Request count: null Error count: null Bytes received: null Bytes sent: null Free memory: 1055964040 Total memory: 1570766848 Active Sessions: 679"
"Information","scheduler-1","07/20/14","15:13:24",,"Max threads: null Current thread count: null Current thread busy: null Max processing time: null Request count: null Error count: null Bytes received: null Bytes sent: null Free memory: 1136605536 Total memory: 1572864000 Active Sessions: 674"
"Information","scheduler-1","07/20/14","15:14:24",,"Max threads: null Current thread count: null Current thread busy: null Max processing time: null Request count: null Error count: null Bytes received: null Bytes sent: null Free memory: 939095720 Total memory: 1572339712 Active Sessions: 673"
On a ColdFusion 11 development server, I turned on Enable Metrics Logging just to see what it reported. The metrics.log on that server looks like this:
"Information","scheduler-1","07/20/14","15:20:59",,"Max threads: null Current thread count: null Current thread busy: null Max processing time: null Request count: null Error count: null Bytes received: null Bytes sent: null Free memory: 589971080 Total memory: 1320157184 Active Sessions: 40"
"Information","scheduler-2","07/20/14","15:21:59",,"Max threads: null Current thread count: null Current thread busy: null Max processing time: null Request count: null Error count: null Bytes received: null Bytes sent: null Free memory: 583831160 Total memory: 1320157184 Active Sessions: 41"
"Information","scheduler-2","07/20/14","15:22:59",,"Max threads: null Current thread count: null Current thread busy: null Max processing time: null Request count: null Error count: null Bytes received: null Bytes sent: null Free memory: 907572096 Total memory: 1431830528 Active Sessions: 40"
The problem is that almost all of the information is coming back as "null".
ColdFusion 10 environment:
Windows Server 2008 R2,
ColdFusion 10 Standard 64-bit,
Java 7u60
ColdFusion 11 environment:
Windows Server 2012 R2,
ColdFusion 11 Standard 64-bit,
Java 7u65
Additional Note
This was found in the coldfusion-error.log on ColdFusion 10 (not in CF 11 though):
java.lang.NullPointerException
at coldfusion.server.jrun4.metrics.SimpleLoadMetric.run(SimpleLoadMetric.java:157)
at coldfusion.scheduling.ThreadPool.run(ThreadPool.java:211)
at coldfusion.scheduling.WorkerThread.run(WorkerThread.java:71)
The Question
Does anyone know how to get Enable Metrics Logging to actually report the thread metrics?
In the first link, "carl type3" posted a sample of his metrics.log file, and it had all of the information I want to get.
CF Admin Metrics Settings
ColdFusion 10 workers.properties:
worker.list=cfusion
worker.cfusion.type=ajp13
worker.cfusion.host=localhost
worker.cfusion.port=8012
worker.cfusion.max_reuse_connections=250
worker.cfusion.connection_pool_size=500
worker.cfusion.connection_pool_timeout=60
ColdFusion 10 server.xml connector:
<Connector port="8012" protocol="AJP/1.3" redirectPort="8445" tomcatAuthentication="false" maxThreads="500" connectionTimeout="60000" />
To enable metrics logging, go to Debugging & Logging > Debug Output Settings and turn on Enable Metrics Logging there.
"Max threads" showing a null value in your logs further suggests that metrics logging is not actually working. Use the Current Thread Count as an input to connection_pool_size, and then set max_reuse_connections. You will also need to add connectionTimeout and maxThreads in server.xml, as suggested in http://blogs.coldfusion.com/post.cfm/coldfusion-11-iis-connector-tuning. This applies to CF10 as well.
The correct port to set in the CF Admin can be found in the cfusion server.xml.
In most setups with IIS as the frontend web server it is 8012, so change this setting in the CF Admin to that value.
Restart ColdFusion and you should finally see some values with cfstat as well as in the metrics log.
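Once real numbers are being logged, a small script can pull the peak thread usage out of metrics.log as input for the pool sizing. A minimal Python sketch against the log format shown above (the log path is an assumption for illustration; adjust it for your install):

import re

# Track the highest "Current thread busy" value seen in metrics.log;
# the peak is the interesting input for connection_pool_size tuning.
pattern = re.compile(r"Current thread busy: (\d+)")
peak_busy = 0

# Default CF10 log location is assumed here.
with open(r"C:\ColdFusion10\cfusion\logs\metrics.log") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            peak_busy = max(peak_busy, int(match.group(1)))

print("Peak busy threads:", peak_busy)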
Related
I'm getting a throttling error while getting data from Exchange Online with a PowerShell command, and I'm having trouble understanding what exactly exceeds the policy. I haven't found any documentation, and all the numbers in the error message look fine to me.
Category: 1003 Message:This operation exceeds the throttling budget for policy part 'LocalTime', policy value '3000000', Budget type: 'PowerShell'. Suggested backoff time 226271 ms.
PowerShell BudgetType: PowerShell
ActiveRunspaces: 0/20
Balance: -2960943/2160000/-3000000
PowerShellCmdletsLeft: 397/400
ExchangeCmdletsLeft: 197/200
CmdletTimePeriod: 5
DestructiveCmdletsLeft: 120/120
DestructiveCmdletTimePeriod: 60
QueueDepth: 100
MaxRunspacesTimePeriod: 60
RunSpacesRemaining: 20/20
LastTimeFrameUpdate: 6/12/2022 6:45:06 AM
LastTimeFrameUpdateDestructiveCmdlets: 6/12/2022 6:44:57 AM
LastTimeFrameUpdateMaxRunspaces: 6/12/2022 6:44:57 AM
Locked: True
LockRemaining: 00:03:46.2710274 ;
Computer: )
Reason:OverBudgetException
I want to change the policy to avoid these issues, but I'm not really sure what should be changed.
We have an Orion instance which crashes about once every day or two.
In /var/log/contextBroker/contextBroker-service.out I have found:
log directory: '/var/log/contextBroker'
*** glibc detected *** /usr/bin/contextBroker: corrupted double-linked list: 0x00007f0ed92e3f70 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x75f4e)[0x7f0eecdeaf4e]
/lib64/libc.so.6(+0x763d3)[0x7f0eecdeb3d3]
/lib64/libc.so.6(+0x78c88)[0x7f0eecdedc88]
/usr/lib64/libstdc++.so.6(_ZNSsD1Ev+0x39)[0x7f0eed6404c9]
/usr/bin/contextBroker(_Z9jsonParseP14ConnectionInfoPKcRKSsP8JsonNodeP9ParseData+0x539)[0x56fb99]
/usr/bin/contextBroker(_Z9jsonTreatPKcP14ConnectionInfoP9ParseData11RequestTypeRKSsPP11JsonRequest+0x17d)[0x56cf0d]
/usr/bin/contextBroker(_Z12payloadParseP14ConnectionInfoP9ParseDataP11RestServicePP10XmlRequestPP11JsonRequestP18JsonDelayedReleaseRSt6vectorISsSaISsEE+0x3f2)[0x564012]
/usr/bin/contextBroker(_Z11restServiceP14ConnectionInfoP11RestService+0x126c)[0x5654bc]
/usr/bin/contextBroker[0x55cbb6]
/usr/bin/contextBroker[0x55f987]
/usr/lib64/libmicrohttpd.so.10(+0x5599)[0x7f0eee1cf599]
/usr/lib64/libmicrohttpd.so.10(MHD_connection_handle_idle+0x518)[0x7f0eee1d0078]
/usr/lib64/libmicrohttpd.so.10(+0xc3c8)[0x7f0eee1d63c8]
/lib64/libpthread.so.0(+0x7a51)[0x7f0eec957a51]
/lib64/libc.so.6(clone+0x6d)[0x7f0eece5d93d]
And in /var/log/contextBroker/contextBroker-service.out.old the following:
log directory: '/var/log/contextBroker'
*** glibc detected *** /usr/bin/contextBroker: free(): invalid next size (fast): 0x00007fe6d4262110 ***
======= Backtrace: =========
/lib64/libc.so.6(+0x75f4e)[0x7fe6e8e9cf4e]
/lib64/libc.so.6(+0x78cf0)[0x7fe6e8e9fcf0]
/usr/bin/contextBroker(_ZN20ContextElementVector7releaseEv+0x2fa)[0x5f4a4a]
/usr/bin/contextBroker(_Z17postUpdateContextP14ConnectionInfoiRSt6vectorISsSaISsEEP9ParseDatab+0x1472)[0x4d6692]
/usr/bin/contextBroker(_Z11restServiceP14ConnectionInfoP11RestService+0x6d6)[0x564926]
/usr/bin/contextBroker[0x55cbb6]
/usr/bin/contextBroker[0x55f987]
/usr/lib64/libmicrohttpd.so.10(+0x5599)[0x7fe6ea281599]
/usr/lib64/libmicrohttpd.so.10(MHD_connection_handle_idle+0x518)[0x7fe6ea282078]
/usr/lib64/libmicrohttpd.so.10(+0xc3c8)[0x7fe6ea2883c8]
/lib64/libpthread.so.0(+0x7a51)[0x7fe6e8a09a51]
/lib64/libc.so.6(clone+0x6d)[0x7fe6e8f0f93d]
Data is sent to Orion in batches every 5 minutes:
a request with around 500 contextElements
a request with around 10 contextElements
a request with a single contextElement
Orion has only 2 subscriptions (AFAIK) which send the data to Proton-CEP.
The Orion version is:
[centos@orion ~]$ /usr/bin/contextBroker --version
0.25.0 (git version: a8cf800d4e9fdd7b4293a886490c40309a5bb58c)
Copyright 2013-2015 Telefonica Investigacion y Desarrollo, S.A.U
Is there anything we can do to debug the issue?
Taking into account the user's inputs, Orion seems to be running below the recommended CPU and RAM thresholds (see the recommendations). Thus, with more resources (e.g. 2 vCPU and 4 GB RAM) it would probably run better, especially if MongoDB runs on the same machine as Orion (MongoDB is known to be a memory-intensive process).
I am going to set up a Kafka cluster on Apache Mesos.
I followed the instructions for kafka-mesos on GitHub. I installed a Mesos cluster (using Mesosphere, without Marathon) with 3 nodes, each with 2 CPUs and 4 GB of memory. I tested the cluster with hello-world examples successfully.
I can run the kafka-mesos scheduler on it and can add brokers to it.
But when I try to start the broker, a memory limit issue appears:
broker-191-.... TASK_FAILED slave:#c3-S1 reason:REASON_MEMORY_LIMIT
The cluster has 12 GB of memory, but the broker needs just 3 GB of memory with a 1 GB heap. (I tested it with various configurations from 512 MB up to 3 GB, but none worked.)
What is the problem, and what is the solution?
The complete log is here:
2015-10-17 17:39:24,748 [Jetty-17] INFO ly.stealth.mesos.kafka.HttpServer$ - handling - http://192.168.11.191:7000/api/broker/start
2015-10-17 17:39:28,202 [Thread-605] INFO ly.stealth.mesos.kafka.Scheduler$ - [resourceOffers]
mesos-2#O1160 cpus:2.00 mem:4098.00 disk:9869.00 ports:[31000..32000]
mesos-3#O1161 cpus:2.00 mem:4098.00 disk:9869.00 ports:[31000..32000]
mesos-1#O1162 cpus:2.00 mem:4098.00 disk:9869.00 ports:[31000..32000]
2015-10-17 17:39:28,204 [Thread-605] INFO ly.stealth.mesos.kafka.Scheduler$ - Starting broker 191: launching task broker-191-0abe9e57-b0fb-4d87-a1b4-529acb111940 by offer mesos-2#O1160
broker-191-0abe9e57-b0fb-4d87-a1b4-529acb111940 slave:#c6-S3 cpus:1.00 mem:3096.00 ports:[31000..31000] data:defaults=broker.id\=191\,log.dirs\=kafka-logs\,port\=31000\,zookeeper.connect\=192.168.11.191:2181\\\,192.168.11.192:2181\\\,192.168.11.193:2181\,host.name\=mesos-2\,log.retention.bytes\=10737418240,broker={"stickiness" : {"period" : "10m"\, "stopTime" : "2015-10-17 13:43:29.278"}\, "id" : "191"\, "mem" : 3096\, "cpus" : 1.0\, "heap" : 1024\, "failover" : {"delay" : "1m"\, "maxDelay" : "10m"}\, "active" : true}
2015-10-17 17:39:28,417 [Thread-606] INFO ly.stealth.mesos.kafka.Scheduler$ - [statusUpdate] broker-191-0abe9e57-b0fb-4d87-a1b4-529acb111940 TASK_FAILED slave:#c6-S3 reason:REASON_MEMORY_LIMIT
2015-10-17 17:39:28,418 [Thread-606] INFO ly.stealth.mesos.kafka.Scheduler$ - Broker 191 failed 1, waiting 1m, next start ~ 2015-10-17 17:40:28+03
2015-10-17 17:39:29,202 [Thread-607] INFO ly.stealth.mesos.kafka.Scheduler$ - [resourceOffers]
I found the following in the Mesos master log:
...validation.cpp:422] Executor broker-191-... for task broker-191-... uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases.
...validation.cpp:434] Executor broker-191-... for task broker-191-... uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases.
But I did set the CPU and memory for the broker via broker add (update):
broker updated:
id: 191
active: false
state: stopped
resources: cpus:1.00, mem:2048, heap:1024, port:auto
failover: delay:1m, max-delay:10m
stickiness: period:10m, expires:2015-10-19 11:15:53+03
The executor doesn't get the heap setting, just the broker does. I opened an issue for this: https://github.com/mesos/kafka/issues/137. Please increase mem until a patch is available.
I suspect this hasn't been seen as a problem before because mem usually gets set to a much larger value (the size of the data set you don't want to have to read from disk), so that there is page cache available for maximum efficiency: http://kafka.apache.org/documentation.html#maximizingefficiency
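For example, assuming the scheduler CLI from the linked repo (the exact command and flag names are an assumption; check the repo's README), raising the broker's memory allocation would look something like this:

# hypothetical invocation: give the task headroom beyond the 1 GB heap
./kafka-mesos.sh broker update 191 --mem 4096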
I've got a server running around 500 PowerShell processes. Each of these processes is designed to make WMI calls across our environment. I've been careful to verify that I do not use up all of the server's available memory or CPU. When I have all 500 processes running, I'm at around 70% memory usage.
Just in case anybody is wondering how the individual processes are handled: they are executed using a Gearman job worker. Basically, a Python shell script that calls a PowerShell script...times 500.
The issue I'm running into is that some of my PowerShell processes are crashing after running for a few hours.
Some of the errors that I'm getting are:
A new guard page for the stack cannot be created
When I open Event Viewer, I see these events when processes crash:
Fault bucket , type 0
Event Name: PowerShell
Response: Not available
Cab Id: 0
Problem signature:
P1: powershell.exe
P2: 6.3.9600.16394
P3: System.OutOfMemoryException
P4: System.OutOfMemoryException
P5: oft.PowerShell.ConsoleHost.ReportExceptionFallback
P6: lization.EncodingTable.nativeCreateOpenFileMapping
P7: Consol.. main thread
P8:
P9:
P10:
Attached files:
These files may be available here:
C:\path
Analysis symbol:
Rechecking for solution: 0
Report Id: ID
Report Status: 2048
Hashed bucket:
I'm guessing it has something to do with PowerShell running out of memory, but the server's memory is not peaked, and not all processes crash; it is sporadic.
Any help would be appreciated.
Here are more crash results; the PowerShell fault module names differ from time to time:
Problem Event Name: APPCRASH
Application Name: powershell.exe
Application Version: 6.3.9600.16384
Application Timestamp: 52158733
Fault Module Name: ntdll.dll
Fault Module Version: 6.3.9600.16408
Fault Module Timestamp: 523d45fa
Exception Code: c00000fd
Exception Offset: 00069abb
OS Version: 6.3.9600.2.0.0.272.7
Locale ID: 1033
Additional Information 1: 624b
Additional Information 2: 624b484d3cf74536f98239c741379147
Additional Information 3: a901
Additional Information 4: a901f876e92d1eb79eb3a513defef0c6
Problem signature:
Problem Event Name: APPCRASH
Application Name: powershell.exe
Application Version: 6.3.9600.16384
Application Timestamp: 52158733
Fault Module Name: combase.dll
Fault Module Version: 6.3.9600.16408
Fault Module Timestamp: 523d3001
Exception Code: c00000fd
Exception Offset: 0001a360
OS Version: 6.3.9600.2.0.0.272.7
Locale ID: 1033
Additional Information 1: 81ca
Additional Information 2: 81cae32566783b059420874b47802c3e
Additional Information 3: b637
Additional Information 4: b6375e6f6a866fc9d00393d4649231b8
Have you looked at your max memory allocation per shell?
get-item WSMan:\localhost\Shell\MaxMemoryPerShellMB
And if it's too low, change it:
set-item WSMan:\localhost\Shell\MaxMemoryPerShellMB 2048
Doesn't .NET have a memory limit?
If you're using Task Manager to check memory usage, you might try Process Explorer instead. It sometimes gives very different results.
Thanks everyone for the responses. It turns out that I had a memory leak in my PowerShell code that was causing memory usage to spike every now and then. Since I was not watching the server at every second, I missed when the memory usage spiked.
An interesting note: it appears that PowerShell will not use more than 80% of available memory on a server before killing its own processes.
I had to increase the available memory to 56 GB, and now I'm not running into any issues whatsoever. I've been running 600 PowerShell processes for a week now and have not had one crash on me.
In my project, I have to update documents in MongoDB many times. I know MongoDB supports inserting many documents in a single command with insert_many, but I can't use update_many to update many at once, because each document has a different condition; I have to update them one by one.
With insert_many, I can insert more than 7000 documents per second. In the same environment, only about 1500 documents per second can be updated. It just seems inefficient to send thousands of commands when one would do.
Is it possible to send multiple update commands to the MongoDB server at once?
Thanks for your explanation, @Blakes Seven. I have rewritten my program with Bulk operations, updating documents as a single "unordered" batch (a minimal sketch of this approach follows the reports below). Here is the speed report from my test environment:
1 thread: 12655 doc/s cpu: 150 - 200%
2 threads: 19005 doc/s cpu: 200 - 300%
3 threads: 24433 doc/s cpu: 300 - 400%
4 threads: 28957 doc/s cpu: 400 - 500%
5 threads: 35586 doc/s cpu: 500 - 600%
6 threads: 32942 doc/s cpu: 600+%
In my test environment, the test program and the MongoDB server run on the same machine, and scaling across multiple threads is not perfect. Even when running the program with a single thread, MongoDB's CPU usage was between 150% and 200%, so MongoDB was already executing the operations in parallel; there seems to be a limit on the threads used per client connection.
Anyway, a single thread is enough for me; besides, fewer threads have higher efficiency.
Another report, from the online environment, where client and server run on different machines:
1 thread: 14719 doc/s
2 threads: 26837 doc/s
3 threads: 34908 doc/s
4 threads: 46151 doc/s
5 threads: 47842 doc/s
6 threads: 52522 doc/s
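Here is that minimal pymongo sketch of the unordered bulk approach (assumes pymongo 3.x, where bulk_write supersedes the older initialize_unordered_bulk_op API; the database, collection, and field names are hypothetical):

from pymongo import MongoClient, UpdateOne

client = MongoClient()  # assumes a local mongod
coll = client.mydb.mycol

# Each document has its own condition, so update_many does not apply;
# the speedup comes from batching the per-document updates into a
# single bulk_write round trip.
updates = {1: "a", 2: "b", 3: "c"}  # hypothetical per-document values
ops = [
    UpdateOne({"_id": doc_id}, {"$set": {"value": value}})
    for doc_id, value in updates.items()
]

# ordered=False lets the server apply the operations in any order, in
# parallel, and keep going past individual failures.
result = coll.bulk_write(ops, ordered=False)
print(result.matched_count, result.modified_count)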
You can do that with db.collection.findAndModify().
Please go through the documentation:
https://docs.mongodb.com/manual/reference/method/db.collection.findAndModify/
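If you are using pymongo, its counterpart to findAndModify is find_one_and_update. A minimal sketch with hypothetical names:

from pymongo import MongoClient, ReturnDocument

client = MongoClient()  # assumes a local mongod
coll = client.mydb.mycol

# Atomically update a single matching document and return the
# post-update version.
doc = coll.find_one_and_update(
    {"_id": 1},
    {"$set": {"status": "done"}},
    return_document=ReturnDocument.AFTER,
)
print(doc)

Note that this still modifies one document per call; for batching many different updates into one request, bulk_write (shown earlier) is the relevant API.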