If a thread or AsyncTask is started by an Activity, will the thread or AsyncTask be killed if the Activity is destroyed?

If a thread or AsyncTask is started by an Activity, will the thread or AsyncTask be killed as well when the Activity is destroyed?

Related

IJ000607/IJ031011 connection active locks errors on transaction reaper thread in JBoss EAP

I am getting this error in my Keycloak pod.
How can I fix it?

Xcode crashes when adding a Swift package

When adding a Swift package to a repo, Xcode crashes.
Have you seen such a failure? What did you do?
Here is the stack trace:
PlugIn Identifier: libSwiftPM.dylib
PlugIn Version: ??? (17700)
Date/Time: 2021-04-17 14:02:53.780 +0100
OS Version: macOS 11.3 (20E5231a)
Report Version: 12
Bridge OS Version: 5.3 (18P54555a)
Anonymous UUID: E0A63D54-3439-6D85-7A26-1B18A623DBA6
Sleep/Wake UUID: 48028EAA-522A-4F9B-A1DB-08E369AB0C6B
Time Awake Since Boot: 94000 seconds
Time Since Wake: 44000 seconds
System Integrity Protection: enabled
Crashed Thread: 13 Dispatch queue: -[IDEExecutionEnvironment initWithWorkspaceArena:] (QOS: UNSPECIFIED)
Exception Type: EXC_BAD_INSTRUCTION (SIGILL)
Exception Codes: 0x0000000000000001, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Termination Signal: Illegal instruction: 4
Termination Reason: Namespace SIGNAL, Code 0x4
Terminating Process: exc handler [49193]
Application Specific Information:
ProductBuildVersion: 12D4e
Workspace/Workspace.swift:1161: Fatal error: 'try!' expression unexpectedly raised an error: TSCBasic.GraphError.unexpectedCycle
In my case it was because I had added the package before and then deleted it, but it was still in DerivedData. Clearing DerivedData in /Users/<USER>/Library/Developer/Xcode/DerivedData/ and then adding the Swift package again fixed it.

ERROR ContextCleaner: Error in cleaning thread

I have a project with Spark 1.4.1 and Scala 2.11. When I run it with sbt run (sbt 0.13.12), it displays the following error:
16/12/22 15:36:43 ERROR ContextCleaner: Error in cleaning thread
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:175)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1249)
at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:172)
at org.apache.spark.ContextCleaner$$anon$1.run(ContextCleaner.scala:67)
16/12/22 15:36:43 ERROR Utils: uncaught error in thread SparkListenerBus, stopping SparkContext
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:996)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:317)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:80)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1249)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
Exception: sbt.TrapExitSecurityException thrown from the UncaughtExceptionHandler in thread "run-main-0"
16/12/22 15:36:43 ERROR ContextCleaner: Error in cleaning thread
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:175)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1249)
at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:172)
at org.apache.spark.ContextCleaner$$anon$1.run(ContextCleaner.scala:67)
Note that I stop the Spark context (sc.stop()) at the end of my code, but I still get the same error. Thinking there might not be enough memory, I changed the configuration to give the executor more memory than the driver, as follows:
val conf = new SparkConf().setAppName("Simple project").setMaster("local[*]").set("spark.executor.memory", "2g")
val sc = new SparkContext(conf)
But I still get the same error.
Can you give me an idea of where exactly the problem is: in the memory configuration, or somewhere else?
Stopping the Spark context (sc.stop()) without waiting for the job to complete could be the reason for this. Make sure you call sc.stop() only after all of your Spark actions have completed.
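As a rough sketch of that ordering (the SimpleProject object and the toy RDD below are made up for illustration), the point is that an action such as reduce blocks until the job has finished, so by the time sc.stop() runs the cleaner thread has nothing left in flight:
import org.apache.spark.{SparkConf, SparkContext}

object SimpleProject {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("Simple project")
      .setMaster("local[*]")
      .set("spark.executor.memory", "2g")
    val sc = new SparkContext(conf)

    // reduce is an action: it blocks until the job has completed.
    val total = sc.parallelize(1 to 1000).map(_ * 2).reduce(_ + _)
    println(s"total = $total")

    // Stop the context only after every action has returned.
    sc.stop()
  }
}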

No theme found for specified theme id liferay

I have been deploying a custom theme recently with no problems. However, on a recent deploy Liferay reverted to the default classic theme and I had these errors in the log:
It doesn't unregister or register...
11:10:20,722 INFO [ServerService Thread Pool -- 254][PluginPackageUtil:1049] Reading plugin package for new-theme
11:10:20,731 INFO [ServerService Thread Pool -- 254][ThemeHotDeployListener:129] Unregistering themes for new-theme
11:10:20,732 INFO [ServerService Thread Pool -- 254][ThemeHotDeployListener:164] 0 themes for new-theme was unregistered
11:10:21,081 INFO [ServerService Thread Pool -- 273][HotDeployImpl:185] Deploying new-theme from queue
11:10:21,082 INFO [ServerService Thread Pool -- 273][PluginPackageUtil:1049] Reading plugin package for new-theme
11:10:21,148 INFO [ServerService Thread Pool -- 273][ThemeHotDeployListener:89] Registering themes for new-theme
11:10:21,232 INFO [ServerService Thread Pool -- 273][ThemeHotDeployListener:107] 0 themes for new-theme are available for use
There is nothing else in the logs related to the theme deployment that explains why it was not deployed.
It seems that your theme did not deploy correctly. This can happen if you changed the database after deploying the theme. Please check that your theme is deployed properly (that it is present in your webapps folder) and that you are pointing at the same database.

JBoss shutdown takes forever

I am having issues stopping JBoss. Most of the time, when I execute the shutdown, it stops the server in a couple of seconds.
But sometimes it takes forever to stop and I have to kill the process.
Whenever the shutdown takes long, I see that the scheduler was running, and in the logs I see:
2014-07-14 19:19:29,124 INFO [org.springframework.scheduling.quartz.SchedulerFactoryBean] (JBoss Shutdown Hook) Shutting down Quartz Scheduler
2014-07-14 19:19:29,124 INFO [org.quartz.core.QuartzScheduler] (JBoss Shutdown Hook) Scheduler scheduler_$_s608203at1vl07 shutting down.
2014-07-14 19:19:29,124 INFO [org.quartz.core.QuartzScheduler] (JBoss Shutdown Hook) Scheduler scheduler_$_s608203at1vl07 paused.
and nothing after that.
Make sure the Quartz scheduler thread and all threads in its thread pool are marked as daemon threads, so that they do not prevent the JVM from exiting.
This can be achieved by setting the following Quartz properties, respectively:
org.quartz.scheduler.makeSchedulerThreadDaemon=true
org.quartz.threadPool.makeThreadsDaemons=true
While it is safe to mark the scheduler thread as a daemon thread, think carefully before you mark your thread-pool threads as daemon threads, because when the JVM exits these "worker" threads can be in the middle of executing logic that you do not want to abort abruptly. If that is the case, have your jobs implement the org.quartz.InterruptableJob interface and add a JVM shutdown hook somewhere in your application that interrupts all currently executing jobs (the list of which can be obtained from the org.quartz.Scheduler API).
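As a rough sketch of that idea in Scala (the ReportJob and QuartzShutdown names are made up, and the Quartz 2.x API is assumed):
import org.quartz.{InterruptableJob, JobExecutionContext, Scheduler}
import scala.collection.JavaConverters._

// A job that cooperates with interruption by checking a flag between units of work.
class ReportJob extends InterruptableJob {
  @volatile private var interrupted = false

  override def execute(ctx: JobExecutionContext): Unit = {
    var chunk = 0
    while (chunk < 100 && !interrupted) {
      Thread.sleep(100) // stand-in for one unit of real work
      chunk += 1
    }
  }

  // Called by the scheduler when Scheduler.interrupt(...) is invoked.
  override def interrupt(): Unit = { interrupted = true }
}

object QuartzShutdown {
  // Register once at startup with the scheduler obtained from Spring/Quartz.
  def register(scheduler: Scheduler): Unit = {
    Runtime.getRuntime.addShutdownHook(new Thread(new Runnable {
      override def run(): Unit = {
        // Ask every currently executing job to stop (Quartz 2.x API shown).
        scheduler.getCurrentlyExecutingJobs.asScala.foreach { ctx =>
          scheduler.interrupt(ctx.getJobDetail.getKey)
        }
        scheduler.shutdown(true) // then wait for jobs to wind down cleanly
      }
    }))
  }
}
Note that interrupt() only sets a flag; the job itself has to check it between units of work, so the shutdown is cooperative rather than abrupt.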