iMac with mDNSResponder keeps waking up - macos-sierra

I am trying to figure out why my iMac (2011) wakes up so frequently (every ~2 minutes).
If anyone can help, I would be really grateful.
Here is my pmset log:
2017-05-03 22:14:31 +0300 Wake Requests [*proc=mDNSResponder request=Maintenance inDelta=5662]
2017-05-03 22:15:28 +0300 Assertions PID 214(mDNSResponder) Created MaintenanceWake "mDNSResponder:maintenance" 00:00:00 id:0x0xd00008556 [System: DeclUser BGTask SRPrevSleep kCPU kDisp]
2017-05-03 22:15:30 +0300 Assertions PID 214(mDNSResponder) Released MaintenanceWake "mDNSResponder:maintenance" 00:00:02 id:0x0xd00008556 [System: DeclUser BGTask kDisp]
2017-05-03 22:15:43 +0300 DarkWake DarkWake from Normal Sleep [CDN] due to GIGE/Network: Using AC 45 secs
2017-05-03 22:15:43 +0300 WakeDetails DriverReason:Enet.Service - DriverDetails:
2017-05-03 22:15:43 +0300 Kernel Client Acks Delays to Wake notifications: [PRT1 driver is slow(msg: SetState to 2)(330 ms)] [IOUSBMassStorageInterfaceNub driver is slow(msg: SetState to 1)(1106 ms)] [IOUSBMassStorageDriverNub driver is slow(msg: SetState to 1)(1175 ms)] [com_apple_driver_AppleUSBCardReaderInterfaceNub driver is slow(msg: SetState to 2)(1107 ms)] [com_apple_driver_AppleUSBCardReaderDriverNub driver is slow(msg: DidChangeState to 2)(1172 ms)] [PRT0 driver is slow(msg: SetState to 2)(14532 ms)] [AppleAHCIDiskQueueManager driver is slow(msg: SetState to 3)(319 ms)]
2017-05-03 22:16:32 +0300 Wake Requests [*proc=mDNSResponder request=Maintenance inDelta=6429]
2017-05-03 22:17:28 +0300 Assertions PID 214(mDNSResponder) Created MaintenanceWake "mDNSResponder:maintenance" 00:00:00 id:0x0xd00008563 [System: DeclUser BGTask SRPrevSleep kCPU kDisp]
2017-05-03 22:17:30 +0300 Assertions PID 214(mDNSResponder) Released MaintenanceWake "mDNSResponder:maintenance" 00:00:02 id:0x0xd00008563 [System: DeclUser kDisp]
2017-05-03 22:17:43 +0300 DarkWake DarkWake from Normal Sleep [CDN] due to GIGE/Network: Using AC 45 secs
2017-05-03 22:17:43 +0300 WakeDetails DriverReason:Enet.Service - DriverDetails:
2017-05-03 22:17:43 +0300 Kernel Client Acks Delays to Wake notifications: [PRT1 driver is slow(msg: SetState to 2)(331 ms)] [IOUSBMassStorageInterfaceNub driver is slow(msg: SetState to 1)(1102 ms)] [IOUSBMassStorageDriverNub driver is slow(msg: SetState to 1)(1169 ms)] [com_apple_driver_AppleUSBCardReaderInterfaceNub driver is slow(msg: SetState to 2)(1103 ms)] [com_apple_driver_AppleUSBCardReaderDriverNub driver is slow(msg: SetState to 2)(1179 ms)] [PRT0 driver is slow(msg: SetState to 2)(14622 ms)] [AppleAHCIDiskQueueManager driver is slow(msg: SetState to 3)(318 ms)] [IOSCSIPeripheralDeviceType00 driver is slow(msg: SetState to 3)(14316 ms)]
2017-05-03 22:18:33 +0300 Wake Requests [*proc=mDNSResponder request=Maintenance inDelta=6427]
2017-05-03 22:18:49 +0300 Assertions PID 214(mDNSResponder) Created MaintenanceWake "mDNSResponder:maintenance" 00:00:00 id:0x0xd0000856d [System: DeclUser BGTask SRPrevSleep kCPU kDisp]
2017-05-03 22:18:51 +0300 Assertions PID 214(mDNSResponder) Released MaintenanceWake "mDNSResponder:maintenance" 00:00:02 id:0x0xd0000856d [System: DeclUser BGTask IPushSrvc kDisp]
2017-05-03 22:19:03 +0300 Wake Wake from Normal Sleep [CDNVA] due to EHC2/HID Activity: Using AC 31 secs
2017-05-03 22:19:03 +0300 Kernel Client Acks Delays to Wake notifications: [PRT1 driver is slow(msg: SetState to 2)(332 ms)] [AppleHDADriver driver is slow(msg: SetState to 1)(473 ms)] [AppleHDADriver driver is slow(msg: SetState to 1)(402 ms)] [IOUSBMassStorageInterfaceNub driver is slow(msg: SetState to 1)(1064 ms)] [com_apple_driver_AppleUSBCardReaderInterfaceNub driver is slow(msg: SetState to 2)(1069 ms)] [com_apple_driver_AppleUSBCardReaderDriverNub driver is slow(msg: SetState to 2)(1136 ms)] [IOUSBMassStorageDriverNub driver is slow(msg: SetState to 1)(1143 ms)] [PRT0 driver is slow(msg: SetState to 2)(14250 ms)] [AppleAHCIDiskQueueManager driver is slow(msg: SetState to 3)(318 ms)]
Sleep/Wakes since boot at 2017-04-30 08:42:37 +0300 :418 Dark Wake Count in this sleep cycle:2
2017-05-03 22:19:34 +0300 Sleep Entering DarkWake state due to 'Software Sleep pid=126': Using AC
2017-05-03 22:19:53 +0300 Wake Requests [*proc=mDNSResponder request=Maintenance inDelta=6429]
2017-05-03 22:20:35 +0300 Assertions PID 214(mDNSResponder) Created MaintenanceWake "mDNSResponder:maintenance" 00:00:00 id:0x0xd000085be [System: DeclUser BGTask SRPrevSleep kCPU kDisp]
2017-05-03 22:20:37 +0300 Assertions PID 214(mDNSResponder) Released MaintenanceWake "mDNSResponder:maintenance" 00:00:02 id:0x0xd000085be [System: DeclUser kDisp]
2017-05-03 22:20:50 +0300 Wake Wake from Normal Sleep [CDNVA] due to GIGE/UserActivity Assertion: Using AC
2017-05-03 22:20:50 +0300 WakeDetails DriverReason:Enet.Service - DriverDetails:
2017-05-03 22:20:50 +0300 Kernel Client Acks Delays to Wake notifications: [PRT1 driver is slow(msg: SetState to 2)(331 ms)] [com_apple_driver_AppleUSBCardReaderInterfaceNub driver is slow(msg: SetState to 2)(1106 ms)] [com_apple_driver_AppleUSBCardReaderDriverNub driver is slow(msg: DidChangeState to 2)(1169 ms)] [IOUSBMassStorageInterfaceNub driver is slow(msg: SetState to 1)(1112 ms)] [IOUSBMassStorageDriverNub driver is slow(msg: SetState to 1)(1178 ms)] [AppleHDADriver driver is slow(msg: SetState to 1)(474 ms)] [AppleHDADriver driver is slow(msg: SetState to 1)(400 ms)] [PRT0 driver is slow(msg: SetState to 2)(14526 ms)] [AppleAHCIDiskQueueManager driver is slow(msg: SetState to 3)(319 ms)] [IOSCSIPeripheralDeviceType00 driver is slow(msg: SetState to 4)(14300 ms)]
Total Sleep/Wakes since boot at 2017-04-30 08:42:37 +0300 :419
I also have a packet capture from the Ethernet device (not included here).
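To see at a glance what is triggering the wakes, here is a minimal sketch (assuming the 'pmset -g log' output has been saved to a file; the file name is arbitrary) that tallies the wake reasons and the processes requesting maintenance wakes:

import re
from collections import Counter

# Tally the "due to X" wake reasons and the processes behind scheduled
# maintenance wakes in a saved copy of 'pmset -g log' output
# (the file name is arbitrary).
reasons = Counter()
procs = Counter()
with open("pmset.log") as log:
    for line in log:
        reason = re.search(r"due to ([^:]+):", line)
        if reason:
            reasons[reason.group(1).strip()] += 1
        proc = re.search(r"proc=(\w+)", line)
        if proc:
            procs[proc.group(1)] += 1

print("Wake reasons:", reasons.most_common(5))
print("Requesting processes:", procs.most_common(5))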

Related

Celery on Kubernetes executes tasks 15 minutes after receiving them

I am trying to migrate my Django/Celery app from Nomad (HashiCorp) to Kubernetes, and jobs decorated with @shared_task() are executed 15 minutes after the message is received.
I don't see anything unusual in stats or status, and the Redis connection is OK.
I can see the task in Flower, but it stays in the Started state for 15 minutes:
Received 2021-09-28 20:30:56.387649 UTC
Started 2021-09-28 20:30:56.390532 UTC
Succeeded 2021-09-28 20:46:00.556030 UTC
Received 2021-09-28 21:18:43.436750 UTC
Started 2021-09-28 21:18:43.441041 UTC
Succeeded 2021-09-28 21:33:49.391542 UTC
Celery version is 4.4.2
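For reference, a minimal sketch of the kind of task involved (module, app, and task names are hypothetical; the broker/backend URLs assume Redis):

# tasks.py - a minimal Celery app with a shared task (hypothetical names).
from celery import Celery, shared_task

app = Celery("myapp",
             broker="redis://redis:6379/0",
             backend="redis://redis:6379/1")

@shared_task()
def process_message(payload):
    # Placeholder for the real work; in the setup described above the task
    # itself is quick, yet Flower shows it as Started for ~15 minutes.
    return {"ok": True, "payload": payload}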
Is there any resolution to this problem?
Fixed: it came down to a Redis key cache using SETEX.
Thanks.
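As a rough illustration of the SETEX pattern mentioned in the fix (assuming the redis-py client; the key name and TTL are made up):

import redis

r = redis.Redis(host="redis", port=6379, db=0)

# SETEX stores a value together with a time-to-live, so the key expires on
# its own instead of lingering. Key name and TTL are illustrative only.
r.setex("celery:dedupe:12345", 900, "1")   # expires after 900 seconds
print(r.ttl("celery:dedupe:12345"))        # remaining TTL in seconds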

Apache Geode Native Client logs show a connection pool error on starting native client

We run a native client, and I have noticed a "Failed to add endpoint to pool" error in the logs when the client is started.
I set up the logging using:
CacheFactory cacheFactory = new CacheFactory();
return cacheFactory
.Set("log-file", "Geode.log")
.Set("log-level", "ALL")
.Set("name", "Dealer")
.SetPdxReadSerialized(true)
.Create();
The Geode.log file shows the following:
[info 2020/09/03 12:40:57.906591 GMT Daylight Time ARGO:15580 11876] ClientMetadataService started for pool MyPool2
[debug 2020/09/03 12:40:57.986018 GMT Daylight Time ARGO:15580 25428] SerializationRegistry::deserialize typeId = -1 dsCode = 1
[debug 2020/09/03 12:40:57.986095 GMT Daylight Time ARGO:15580 25428] closing the connection locator1
[debug 2020/09/03 12:40:57.986117 GMT Daylight Time ARGO:15580 25428] closing the connection locator
[fine 2020/09/03 12:40:57.986205 GMT Daylight Time ARGO:15580 25428] Created new endpoint 1.2.3.4:40404 for pool MyPool2
[error 2020/09/03 12:40:57.986256 GMT Daylight Time ARGO:15580 25428] Failed to add endpoint 1.2.3.4:40404 to pool MyPool2
[debug 2020/09/03 12:40:57.986285 GMT Daylight Time ARGO:15580 25428] ThinClientRedundancyManager::maintainRedundancyLevel(): checking redundant list, size = 0
[debug 2020/09/03 12:40:57.986303 GMT Daylight Time ARGO:15580 25428] ThinClientRedundancyManager::maintainRedundancyLevel(): finding nonredundant endpoints, size = 1
[fine 2020/09/03 12:40:57.986321 GMT Daylight Time ARGO:15580 25428] Recovering subscriptions on endpoint [1.2.3.4:40404] from pool MyPool2
[fine 2020/09/03 12:40:57.986339 GMT Daylight Time ARGO:15580 25428] TcrEndpoint::createNewConnection: connectTimeout = m_needToConnectInLock=59000000 appThreadRequest =0
[debug 2020/09/03 12:40:57.986361 GMT Daylight Time ARGO:15580 25428] Tcrconnection const isSecondary = 0 and isClientNotification = 0, this = 00000202EDBAD790, conn ref to endopint 1
[finest 2020/09/03 12:40:57.986438 GMT Daylight Time ARGO:15580 25428] Using socket send buffer size of 64240.
[finest 2020/09/03 12:40:57.986465 GMT Daylight Time ARGO:15580 25428] Using socket receive buffer size of 64240.
[debug 2020/09/03 12:40:57.986482 GMT Daylight Time ARGO:15580 25428] Creating plain socket stream
Can someone explain why we see the error here? The code that produces it is in ThinClientPoolDM.cpp, but the error does not seem to make any difference to the client, which we can see does make a connection. Although, per the error, the server endpoint does not appear to have been added to the pool, we see a 'fine' message almost immediately afterwards about recovering subscriptions on that same endpoint.
There was a longstanding bug in this code causing it to log a failure here on success and vice-versa, which is most likely what you're hitting. This was fixed as part of PR #588 for GEODE-7930, on 4/6/2020. Please see if you have this fix in your local repo and reply if you do and are still hitting the issue.

PipelineDB gets stuck under high load, worker process eats 100% CPU doing nothing

Note: I am looking for any hints on how to debug this issue, not necessarily a direct answer to this specific problem.
I am measuring the performance of PipelineDB for use in our system.
I have defined a few continuous views (calculating sums, top-K, and such), feeding from a single stream (which has ~20 columns: some text, mostly integers and booleans).
The test is written in Python, and I am using the psycopg2 cursor.copy_from() function to achieve maximum performance.
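The insert path looks roughly like this (a simplified sketch; the stream and column names are placeholders rather than the real ~20-column schema):

import io
import psycopg2

conn = psycopg2.connect("dbname=db1 user=ut_user host=localhost")
cur = conn.cursor()

# Build one batch of ~100 tab-separated rows and bulk-load it into the
# stream; copy_from() issues a COPY under the hood. Stream and column
# names here are placeholders.
rows = "\n".join(f"{i}\t2017-05-03 22:00:00\tsome_text\tt" for i in range(100))
cur.copy_from(io.StringIO(rows), "my_stream",
              columns=("id", "ts", "label", "flag"))
conn.commit()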
PipelineDB behaves nicely when the work specified by the continuous views is not too complicated.
However, when I ask it to calculate many top-K results, or many percentile_cont() values, the test hangs with the following symptoms:
The (single) 'worker0' process starts eating 100% CPU
The input process shows that it is running the COPY command, never changing to IDLE (during normal work, it alternates between COPY and IDLE).
Test hangs (i.e. the copy_from() function call does not return)
Below is the output of a 'ps -ef' command showing all PipelineDB processes, after about a minute of running the test. Note that the worker0 process has been consuming 100% CPU since the beginning of the test and never resumes normal work ('top' shows that it is consuming exactly 100% CPU).
Test logs show that it runs OK for the first ~1 second, inserting ~30,000 events (in batches of 100), and then it hangs, because a call to the copy_from() function does not return.
When I reduce the amount of work PipelineDB has to do (by removing some of the continuous views), the test works OK, achieving up to 20,000 inserts per second, sustained for at least one minute.
I would like to note that all events have the same timestamp, and all views have a "GROUP BY minute" clause, hence a single row should be created/updated in every continuous view during the test.
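For illustration, one of the heavier views might look something like this (a sketch assuming PipelineDB's 0.9.x-style CREATE CONTINUOUS VIEW syntax; stream, view, and column names are placeholders):

import psycopg2

conn = psycopg2.connect("dbname=db1 user=ut_user host=localhost")
cur = conn.cursor()

# One sum and one percentile per minute bucket. minute(ts) assumes
# PipelineDB's minute() helper; date_trunc('minute', ts) would be the
# plain-PostgreSQL equivalent. The stream is assumed to already exist.
cur.execute("""
    CREATE CONTINUOUS VIEW cv_per_minute AS
    SELECT minute(ts) AS minute,
           sum(amount) AS total_amount,
           percentile_cont(0.99) WITHIN GROUP (ORDER BY latency) AS p99_latency
    FROM my_stream
    GROUP BY minute
""")
conn.commit()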
I have played with some configuration parameters, specifically those related to memory buffer sizes, sync methods, time intervals, max_wait and such, and the number of workers, and could not find any combination that avoids the problem.
I do not know if I am hitting a PipelineDB issue, or a PostgreSQL issue.
Certainly it is not expected behavior, and cannot be tolerated in a real application.
Any hints, guesses, gut feelings etc. are welcome.
[orens@rd10 ~]$ ps -ef | grep pipelinedb
UID PID PPID C STIME TTY TIME CMD
orens 3005 3004 0 11:17 ? 00:00:00 pipelinedb: logger process
orens 3007 3004 0 11:17 ? 00:00:00 pipelinedb: checkpointer process
orens 3008 3004 0 11:17 ? 00:00:00 pipelinedb: writer process
orens 3009 3004 0 11:17 ? 00:00:00 pipelinedb: wal writer process
orens 3010 3004 0 11:17 ? 00:00:00 pipelinedb: autovacuum launcher process
orens 3011 3004 0 11:17 ? 00:00:00 pipelinedb: stats collector process
orens 3012 3004 0 11:17 ? 00:00:00 pipelinedb: pipelinedb scheduler process
orens 3014 3004 0 11:17 ? 00:00:00 pipelinedb: bgworker: reaper0 [pipeline]
orens 3015 3004 0 11:17 ? 00:00:00 pipelinedb: bgworker: queue0 [pipeline]
orens 3016 3004 0 11:17 ? 00:00:00 pipelinedb: bgworker: combiner1 [pipeline]
orens 3017 3004 0 11:17 ? 00:00:00 pipelinedb: bgworker: combiner0 [pipeline]
orens 3018 3004 0 11:17 ? 00:00:00 pipelinedb: bgworker: worker0 [pipeline]
orens 3046 3004 0 11:17 ? 00:00:00 pipelinedb: bgworker: reaper0 [db1]
orens 3050 3004 0 11:17 ? 00:00:00 pipelinedb: bgworker: queue0 [db1]
orens 3052 3004 0 11:17 ? 00:00:00 pipelinedb: bgworker: combiner0 [db1]
orens 3056 3004 90 11:17 ? 00:01:06 pipelinedb: bgworker: worker0 [db1]
orens 3132 3004 1 11:17 ? 00:00:01 pipelinedb: ut_user db1 ::1(58830) COPY
[orens@rd10 ~]$

Unregistering the JDBC driver?

I'm running Tomcat under/inside Eclipse while developing a web application. The web app uses HSQLDB in embedded mode, via Hibernate and Guice. Things seem to be working fine, except when I stop Tomcat. Eclipse has a green start button and a red stop button for Tomcat. When I click the stop button, it no longer stops immediately and cleanly the way it did before I added Hibernate and HSQLDB to the mix. Now it waits a few seconds and then Eclipse shows a dialog box saying it could not stop Tomcat and asking me to click OK to force it to terminate.
Does anyone know what I need to do to fix this? I found some other responses saying to put the HSQLDB jar file in Tomcat's lib directory, but I was wondering if there is anything I could do that's a little less drastic.
Here's what's in the error output from tomcat (in the eclipse console window):
Jan 31, 2017 7:04:11 PM org.apache.catalina.loader.WebappClassLoaderBase clearReferencesJdbc
WARNING: The web application [basic] registered the JDBC driver [org.hsqldb.jdbc.JDBCDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
Jan 31, 2017 7:04:11 PM org.apache.catalina.loader.WebappClassLoaderBase clearReferencesThreads
WARNING: The web application [basic] appears to have started a thread named [HSQLDB Timer #276f0355] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.lang.Object.wait(Native Method)
org.hsqldb.lib.HsqlTimer$TaskQueue.park(Unknown Source)
org.hsqldb.lib.HsqlTimer.nextTask(Unknown Source)
org.hsqldb.lib.HsqlTimer$TaskRunner.run(Unknown Source)
java.lang.Thread.run(Thread.java:745)
Jan 31, 2017 7:04:11 PM org.apache.catalina.loader.WebappClassLoaderBase clearReferencesThreads
WARNING: The web application [basic] appears to have started a thread named [pool-1-thread-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Jan 31, 2017 7:04:11 PM org.apache.catalina.loader.WebappClassLoaderBase checkThreadLocalMapForLeaks
SEVERE: The web application [basic] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal#1d4fc7e8]) and a value of type [org.hibernate.internal.SessionImpl] (value [SessionImpl(PersistenceContext[entityKeys=[],collectionKeys=[]];ActionQueue[insertions=ExecutableList{size=0} updates=ExecutableList{size=0} deletions=ExecutableList{size=0} orphanRemovals=ExecutableList{size=0} collectionCreations=ExecutableList{size=0} collectionRemovals=ExecutableList{size=0} collectionUpdates=ExecutableList{size=0} collectionQueuedOps=ExecutableList{size=0} unresolvedInsertDependencies=null])]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
I have a stupid answer, but I'll show you what I did.
@JulienR: I already had shutdown=true in my persistence.xml file for the javax.persistence.jdbc.url value.
I created a ServletContextListener and added it to my web.xml file; here's the code. The first part, which uses the EntityManager to issue the SHUTDOWN command, is my code. The ClassLoader and driver-eyeballing code I got from here. With this it's no longer complaining about HSQLDB, but I'm still getting a WARNING about two threads that weren't stopped (and I still have to wait for Eclipse to time out). I've appended the relevant lines from the log.
public class BasicServletContextListener implements ServletContextListener {
    private final transient Logger log =
        LoggerFactory.getLogger(BasicServletContextListener.class);

    // private final Provider<EntityManager> entityManagerProvider;

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        this.log.debug("contextEvent: {}", event.toString());
        final EntityManagerFactory entityManagerFactory =
            Persistence.createEntityManagerFactory("ilmp");
        final EntityManager entityManager =
            entityManagerFactory.createEntityManager();
        entityManager.getTransaction().begin();
        final Query query =
            entityManager.createNativeQuery("SHUTDOWN COMPACT;");
        this.log.debug("query: {}", query.executeUpdate());
        entityManager.getTransaction().commit();
        entityManager.close();

        // Now deregister JDBC drivers in this context's ClassLoader:
        // Get the webapp's ClassLoader
        final ClassLoader cl = Thread.currentThread().getContextClassLoader();
        // Loop through all drivers
        final Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            final Driver driver = drivers.nextElement();
            if (driver.getClass().getClassLoader() == cl) {
                // This driver was registered by the webapp's
                // ClassLoader, so deregister it:
                try {
                    this.log.info("Deregistering JDBC driver {}", driver);
                    DriverManager.deregisterDriver(driver);
                }
                catch (final SQLException ex) {
                    this.log.error("Error deregistering JDBC driver {}", driver,
                        ex);
                }
            }
            else {
                // driver was not registered by the webapp's
                // ClassLoader and may be in use elsewhere
                this.log.trace(
                    "Not deregistering JDBC driver {} as it does not belong to this webapp's ClassLoader",
                    driver);
            }
        }
    }

    @Override
    public void contextInitialized(ServletContextEvent event) {
        this.log.debug("contextEvent: {}", event.toString());
    }
}
INFO: 2017-Feb-01 19:05:36.023 [localhost-startStop-2] org.hibernate.hql.internal.QueryTranslatorFactoryInitiator.initiateService.47: HHH000397: Using ASTQueryTranslatorFactory
Hibernate: SHUTDOWN COMPACT;
INFO: 2017-Feb-01 19:05:36.226 [localhost-startStop-2] sun.reflect.NativeMethodAccessorImpl.invoke0.-2: Database closed
INFO: 2017-Feb-01 19:05:36.273 [localhost-startStop-2] sun.reflect.NativeMethodAccessorImpl.invoke0.-2: open start - state not modified
INFO: 2017-Feb-01 19:05:36.351 [localhost-startStop-2] sun.reflect.NativeMethodAccessorImpl.invoke0.-2: Database closed
DEBUG: 2017-Feb-01 19:05:36.460 [localhost-startStop-2] com.objecteffects.basic.persist.BasicServletContextListener.contextDestroyed.42: query: 0
INFO: 2017-Feb-01 19:05:36.460 [localhost-startStop-2] com.objecteffects.basic.persist.BasicServletContextListener.contextDestroyed.59: Deregistering JDBC driver org.hsqldb.jdbc.JDBCDriver@3cd32e8d
Feb 01, 2017 7:05:36 PM org.apache.catalina.loader.WebappClassLoaderBase clearReferencesThreads
WARNING: The web application [basic] appears to have started a thread named [pool-1-thread-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Feb 01, 2017 7:05:36 PM org.apache.catalina.loader.WebappClassLoaderBase clearReferencesThreads
WARNING: The web application [basic] appears to have started a thread named [pool-2-thread-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Feb 01, 2017 7:05:36 PM org.apache.catalina.loader.WebappClassLoaderBase checkThreadLocalMapForLeaks
SEVERE: The web application [basic] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal#1bc6eba0]) and a value of type [org.hibernate.internal.SessionImpl] (value [SessionImpl(PersistenceContext[entityKeys=[EntityKey[com.objecteffects.basic.persist.TumblrSecretsEntity#1], EntityKey[com.objecteffects.basic.persist.TumblrSecretsEntity#2]],collectionKeys=[]];ActionQueue[insertions=ExecutableList{size=0} updates=ExecutableList{size=0} deletions=ExecutableList{size=0} orphanRemovals=ExecutableList{size=0} collectionCreations=ExecutableList{size=0} collectionRemovals=ExecutableList{size=0} collectionUpdates=ExecutableList{size=0} collectionQueuedOps=ExecutableList{size=0} unresolvedInsertDependencies=null])]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
Feb 01, 2017 7:05:36 PM org.apache.coyote.AbstractProtocol stop
INFO: Stopping ProtocolHandler ["http-nio-8080"]
Feb 01, 2017 7:05:36 PM org.apache.coyote.AbstractProtocol stop
INFO: Stopping ProtocolHandler ["ajp-nio-8009"]
Feb 01, 2017 7:05:36 PM org.apache.coyote.AbstractProtocol destroy
INFO: Destroying ProtocolHandler ["http-nio-8080"]
Feb 01, 2017 7:05:36 PM org.apache.coyote.AbstractProtocol destroy
INFO: Destroying ProtocolHandler ["ajp-nio-8009"]
The stupid answer is that I looked at my setup for something I wrote several years ago (I'm retired and just dinking around, trying to write something for myself). Adding the c3p0 config lines I'd used previously to this persistence.xml makes it shut down without the delay, although I still get a bunch of warnings about zombie threads. Here are the relevant lines (still commented out):
<!-- <property -->
<!-- name="hibernate.c3p0.min_size" -->
<!-- value="5" /> -->
<!-- <property -->
<!-- name="hibernate.c3p0.max_size" -->
<!-- value="20" /> -->
<!-- <property -->
<!-- name="hibernate.c3p0.timeout" -->
<!-- value="1800" /> -->
<!-- <property -->
<!-- name="hibernate.c3p0.max_statements" -->
<!-- value="50" /> -->

Is there a way to use a SQL profiler for NMemory (an in-memory database)?

I'm using Entity Framework with Effort, which uses NMemory, to test without actual database side effects. Is there any way to view the SQL that's being sent to the NMemory database?
Edit:
Thanks to @Gert_Arnold I have been looking into DbContext.Database.Log. Unfortunately my output looks like the below. Can anyone comment on this? I'm assuming I'm getting these <null> entries instead of my SQL.
Opened connection at 4/27/2015 11:08:22 AM -05:00
Started transaction at 4/27/2015 11:08:22 AM -05:00
<null>
-- Executing at 4/27/2015 11:08:23 AM -05:00
-- Completed in 132 ms with result: 1
<null>
-- Executing at 4/27/2015 11:08:23 AM -05:00
-- Completed in 5 ms with result: 1
Committed transaction at 4/27/2015 11:08:23 AM -05:00
Closed connection at 4/27/2015 11:08:23 AM -05:00
Disposed transaction at 4/27/2015 11:08:23 AM -05:00
Opened connection at 4/27/2015 11:08:24 AM -05:00
Started transaction at 4/27/2015 11:08:24 AM -05:00
<null>
-- Executing at 4/27/2015 11:08:24 AM -05:00
-- Completed in 8 ms with result: 1
Committed transaction at 4/27/2015 11:08:24 AM -05:00
Closed connection at 4/27/2015 11:08:24 AM -05:00
Disposed transaction at 4/27/2015 11:08:24 AM -05:00
You can intercept and log the commands.
// Before any command is sent, tell EF about the new interceptor.
// (DbInterception and IDbCommandInterceptor live in
// System.Data.Entity.Infrastructure.Interception.)
DbInterception.Add(new MyEFDbInterceptor());

// The interceptor class is called by EF; IDbCommandInterceptor has six
// methods, only a couple of which do anything useful here.
public class MyEFDbInterceptor : IDbCommandInterceptor
{
    public void ReaderExecuting(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        Debug.WriteLine(command.CommandText);
        //Debug.WriteLine(interceptionContext.Result); // might be interesting to use
    }

    public void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        Debug.WriteLine(command.CommandText);
    }

    // The remaining interface members must also be implemented (empty is fine).
    public void NonQueryExecuting(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) { }
    public void NonQueryExecuted(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) { }
    public void ScalarExecuting(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) { }
    public void ScalarExecuted(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) { }
}