Play Framework hanging - REST

Related to the question:
Rest server (Play Framework) gets "Read Timed out" exception during load test
I have a similar situation using Play Framework 2.2.4:
java version "1.8.0_31"
Java(TM) SE Runtime Environment (build 1.8.0_31-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.31-b07, mixed mode)
I am testing Play with JMeter on my local PC. When I hit it with 2000 threads a timeout exception occurs, and the socket somehow does not close.
How do I catch this exception and close the hanging sockets?
StackTrace:
java.util.concurrent.TimeoutException: Futures timed out after [5 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219) [scala-library.jar:na]
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223) [scala-library.jar:na]
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107) ~[scala-library.jar:na]
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:169) ~[akka-actor_2.10.jar:2.2.0]
at scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640) [scala-library.jar:na]
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:167) ~[akka-actor_2.10.jar:2.2.0]
at scala.concurrent.Await$.result(package.scala:107) ~[scala-library.jar:na]
at play.core.j.FPromiseHelper$.get(FPromiseHelper.scala:53) ~[play_2.10.jar:2.2.4]
at play.core.j.FPromiseHelper.get(FPromiseHelper.scala) ~[play_2.10.jar:2.2.4]
at play.libs.F$Promise.get(F.java:363) ~[play_2.10.jar:2.2.4]
wrap response>>
at com.ganda.common.controllers.filters.WrapResponse.wrap(WrapResponse.java:50) ~[na:na]
db filter below>>>
at com.ganda.common.controllers.filters.CredentialsFilter.call(CredentialsFilter.java:72) ~[na:na]
at play.core.j.JavaAction$$anon$3.apply(JavaAction.scala:91) ~[play_2.10.jar:2.2.4]
at play.core.j.JavaAction$$anon$3.apply(JavaAction.scala:90) ~[play_2.10.jar:2.2.4]
at play.core.j.FPromiseHelper$$anonfun$flatMap$1.apply(FPromiseHelper.scala:82) ~[play_2.10.jar:2.2.4]
at play.core.j.FPromiseHelper$$anonfun$flatMap$1.apply(FPromiseHelper.scala:82) ~[play_2.10.jar:2.2.4]
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251) ~[scala-library.jar:na]
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:249) ~[scala-library.jar:na]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library.jar:na]
at play.core.j.HttpExecutionContext$$anon$2.run(HttpExecutionContext.scala:37) ~[play_2.10.jar:2.2.4]
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:42) ~[akka-actor_2.10.jar:2.2.0]
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386) ~[akka-actor_2.10.jar:2.2.0]
Wrapped by: play.api.Application$$anon$1: Execution exception[[TimeoutException: Futures timed out after [5 seconds]]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.4]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.4]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:264) [play_2.10.jar:2.2.4]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:264) [play_2.10.jar:2.2.4]
at scala.Option.map(Option.scala:145) [scala-library.jar:na]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3.applyOrElse(PlayDefaultUpstreamHandler.scala:264) [play_2.10.jar:2.2.4]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3.applyOrElse(PlayDefaultUpstreamHandler.scala:260) [play_2.10.jar:2.2.4]
at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:344) [scala-library.jar:na]
at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:343) [scala-library.jar:na]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library.jar:na]
at play.api.libs.iteratee.Execution$$anon$1.execute(Execution.scala:43) [play-iteratees_2.10.jar:2.2.4]
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) [scala-library.jar:na]
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) [scala-library.jar:na]
at scala.concurrent.Promise$class.complete(Promise.scala:55) [scala-library.jar:na]
at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153) [scala-library.jar:na]
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235) [scala-library.jar:na]
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235) [scala-library.jar:na]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask$AdaptedRunnableAction.exec(ForkJoinTask.java:1361) [scala-library.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [scala-library.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [scala-library.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [scala-library.jar:na]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [scala-library.jar:na]

What is happening here is that the Play promise is timing out, probably due to the slowdown introduced by the load.
So this reveals a performance issue in your application: if it is acceptable you can increase the promise timeout, or you can profile and improve the performance of the promise processing so that it completes faster.
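As an illustration, here is a minimal sketch of both options against the Play 2.2 Java API. The WrapResponse and CredentialsFilter frames in the trace are the asker's own code, so the class and method names below are illustrative assumptions rather than the actual implementation.

import java.util.concurrent.TimeUnit;

import play.libs.F;
import play.mvc.SimpleResult;

public class WrapResponseSketch {

    // Option 1: keep the blocking call but allow more than the default 5 seconds.
    // This still ties up a thread per request, so it is only a stopgap under load.
    public static SimpleResult blockWithLongerTimeout(F.Promise<SimpleResult> promise) {
        return promise.get(30, TimeUnit.SECONDS);
    }

    // Option 2 (preferred): do not block at all; transform the promise and let Play
    // complete the response whenever the underlying work finishes.
    public static F.Promise<SimpleResult> stayAsync(F.Promise<SimpleResult> promise) {
        return promise.map(new F.Function<SimpleResult, SimpleResult>() {
            public SimpleResult apply(SimpleResult result) {
                // wrap or augment the result here instead of calling get()
                return result;
            }
        });
    }
}

Returning the promise (option 2) also means a slow request no longer occupies a thread while it waits, which is usually what keeps sockets from piling up during a load test.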

Related

How to get rid of the 'key not found: SOURCE' error that blocks deployment in Play Framework 2.4.4

In Play Framework 2.4.4 I get, at a seemingly random time, the error 'key not found: SOURCE' at deployment time. Once this has happened there is no way to use that development environment again; I have to go back to a previously saved version of the project and try again. If I make the same code changes (for example something simple like extending a table in a Play HTML page) it may or may not happen again. (I use IntelliJ IDEA version 15 Ultimate.)
After some research, this error message seems to be related to the generation of the *.template.scala files for the HTML pages of the Play framework.
Older suggested remedies mention the 'play clean update' command, but nowadays there is only Activator and I have not found a way to make it do the cleaning and updating.
Any idea why this happens almost every 2 or 3 deployments? What can I do to reset the situation? Any suggestions are greatly appreciated.
Stack dump follows for information:
--- (Running the application, auto-reloading is enabled) ---
[info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
Server started, use Alt+D to stop
[info] Compiling 39 Scala sources and 1 Java source to E:\source\scalaIntelliJ\auctioneer\target\scala-2.11\classes...
java.util.NoSuchElementException: key not found: SOURCE
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:58)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at scala.collection.AbstractMap.apply(Map.scala:58)
at play.twirl.compiler.GeneratedSource.source(TwirlCompiler.scala:129)
at play.twirl.compiler.GeneratedSource.sync(TwirlCompiler.scala:138)
at play.twirl.sbt.TemplateCompiler$$anonfun$syncGenerated$2.apply(TemplateCompiler.scala:38)
at play.twirl.sbt.TemplateCompiler$$anonfun$syncGenerated$2.apply(TemplateCompiler.scala:38)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at play.twirl.sbt.TemplateCompiler$.syncGenerated(TemplateCompiler.scala:38)
at play.twirl.sbt.TemplateCompiler$.compile(TemplateCompiler.scala:23)
at play.twirl.sbt.SbtTwirl$$anonfun$compileTemplatesTask$1.apply(SbtTwirl.scala:87)
at play.twirl.sbt.SbtTwirl$$anonfun$compileTemplatesTask$1.apply(SbtTwirl.scala:86)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:235)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[error] (compile:twirlCompileTemplates) java.util.NoSuchElementException: key not found: SOURCE
[error] application -
! #6po0l44eg - Internal server error, for (GET) [/] ->
play.sbt.PlayExceptions$UnexpectedException: Unexpected exception[NoSuchElementException: key not found: SOURCE]
at play.sbt.run.PlayReload$$anonfun$taskFailureHandler$1.apply(PlayReload.scala:51) ~[na:na]
at play.sbt.run.PlayReload$$anonfun$taskFailureHandler$1.apply(PlayReload.scala:44) ~[na:na]
at scala.Option.map(Option.scala:145) ~[scala-library-2.11.7.jar:na]
at play.sbt.run.PlayReload$.taskFailureHandler(PlayReload.scala:44) ~[na:na]
at play.sbt.run.PlayReload$.compileFailure(PlayReload.scala:40) ~[na:na]
at play.sbt.run.PlayReload$$anonfun$compile$2$$anonfun$apply$3.apply(PlayReload.scala:20) ~[na:na]
at play.sbt.run.PlayReload$$anonfun$compile$2$$anonfun$apply$3.apply(PlayReload.scala:20) ~[na:na]
at scala.util.Either$LeftProjection.map(Either.scala:377) ~[scala-library-2.11.7.jar:na]
at play.sbt.run.PlayReload$$anonfun$compile$2.apply(PlayReload.scala:20) ~[na:na]
at play.sbt.run.PlayReload$$anonfun$compile$2.apply(PlayReload.scala:18) ~[na:na]
Caused by: java.util.NoSuchElementException: key not found: SOURCE
at scala.collection.MapLike$class.default(MapLike.scala:228) ~[scala-library-2.11.7.jar:na]
at scala.collection.AbstractMap.default(Map.scala:58) ~[scala-library-2.11.7.jar:na]
at scala.collection.MapLike$class.apply(MapLike.scala:141) ~[scala-library-2.11.7.jar:na]
at scala.collection.AbstractMap.apply(Map.scala:58) ~[scala-library-2.11.7.jar:na]
at play.twirl.compiler.GeneratedSource.source(TwirlCompiler.scala:129) ~[na:na]
at play.twirl.compiler.GeneratedSource.sync(TwirlCompiler.scala:138) ~[na:na]
at play.twirl.sbt.TemplateCompiler$$anonfun$syncGenerated$2.apply(TemplateCompiler.scala:38) ~[na:na]
at play.twirl.sbt.TemplateCompiler$$anonfun$syncGenerated$2.apply(TemplateCompiler.scala:38) ~[na:na]
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) ~[scala-library-2.11.7.jar:na]
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) ~[scala-library-2.11.7.jar:na]

How to investigate failing dataproc worker processes?

I'm running a PySpark job and I'm having trouble determining the cause of failures in the worker processes.
While the job was running I started noticing stack traces in the job output such as:
16/04/10 03:24:21 WARN org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1460240417530_0021_01_000003 on host: cluster-2-w-0.c.my-project.internal. Exit status: -100. Diagnostics: Container released on a *lost* node
[Stage 0:=================================> (19 + 13) / 32]16/04/10 03:26:21 WARN org.apache.spark.rpc.netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(2,Container marked as failed: container_1460240417530_0021_01_000003 on host: cluster-2-w-0.c.my-project.internal. Exit status: -100. Diagnostics: Container released on a *lost* node)] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:359)
at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$receive$1.applyOrElse(YarnSchedulerBackend.scala:176)
at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
... 11 more
16/04/10 03:26:40 WARN org.apache.spark.rpc.netty.NettyRpcEndpointRef: Error sending message [message = RequestExecutors(23,0,Map())] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Failure.recover(Try.scala:185)
at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at org.spark-project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:133)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
at scala.concurrent.Promise$class.complete(Promise.scala:55)
at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.processBatch$1(Future.scala:643)
at scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply$mcV$sp(Future.scala:658)
at scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
at scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at scala.concurrent.Future$InternalCallbackExecutor$Batch.run(Future.scala:634)
at scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:685)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
at scala.concurrent.Promise$class.tryFailure(Promise.scala:112)
at scala.concurrent.impl.Promise$DefaultPromise.tryFailure(Promise.scala:153)
at org.apache.spark.rpc.netty.NettyRpcEnv$$anon$1.run(NettyRpcEnv.scala:241)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Cannot receive any reply in 120 seconds
at org.apache.spark.rpc.netty.NettyRpcEnv$$anon$1.run(NettyRpcEnv.scala:242)
... 7 more
[Stage 0:=================================> (19 + 13) / 32]
I'll also notice the overall CPU usage of the cluster slowly drop as worker nodes fail. These nodes seem to fail permanently and do not re-join the cluster.
I'm using preemptible machines, but when I check the status of these machines they are still running and have not been preempted, so I'm guessing something is wrong on the worker side.
It could be because of the heavy workload on the workers. Try increasing spark.network.timeout (default 120s) to a larger value.
If that does not resolve the error, the most likely cause is garbage collection. Try running a memory profile with the following options:
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/ -XX:+CMSClassUnloadingEnabled
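For reference, a hedged sketch of how the two suggestions above can be set programmatically. The question uses PySpark, where the same property names apply (via SparkConf().set(...) or spark-submit --conf); the Java form and the chosen values below are only examples.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class TimeoutAndGcProfilingConfig {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("timeout-and-gc-profiling")
                // Raise the RPC/network timeout beyond the 120s default.
                .set("spark.network.timeout", "600s")
                // Forward the GC-profiling flags from the answer to every executor JVM.
                .set("spark.executor.extraJavaOptions",
                     "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps "
                     + "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/ "
                     + "-XX:+CMSClassUnloadingEnabled");

        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... run the job ...
        sc.stop();
    }
}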

Database schema is different - Exception in OrientDB - Baasbox

I am getting the following exception when starting up BaasBox (which has embedded OrientDB).
This exception happens after I connect to the database using console.sh (connect plocal) and then run a few select queries; it doesn't happen the first time I start up BaasBox.
Any help is appreciated.
OrientDB version: 1.7.10
com.orientechnologies.orient.core.exception.OConfigurationException: Database schema is different. Please export your old database with the previous version of OrientDB and reimport it using the current one.
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.fromStream(OSchemaShared.java:800) ~[orientdb-core-1.7.10.jar:1.7.10]
at com.orientechnologies.orient.core.type.ODocumentWrapperNoClass.reload(ODocumentWrapperNoClass.java:70) ~[orientdb-core-1.7.10.jar:1.7.10]
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.load(OSchemaShared.java:921) ~[orientdb-core-1.7.10.jar:1.7.10]
at com.orientechnologies.orient.core.metadata.OMetadataDefault$1.call(OMetadataDefault.java:115) ~[orientdb-core-1.7.10.jar:1.7.10]
at com.orientechnologies.orient.core.metadata.OMetadataDefault$1.call(OMetadataDefault.java:110) ~[orientdb-core-1.7.10.jar:1.7.10]
at com.orientechnologies.common.concur.resource.OSharedContainerImpl.getResource(OSharedContainerImpl.java:53) ~[orient-commons-1.7.10.jar:1.7.10]
Wrapped by: com.orientechnologies.common.exception.OException: Error on creation of shared resource
at com.orientechnologies.common.concur.resource.OSharedContainerImpl.getResource(OSharedContainerImpl.java:55) ~[orient-commons-1.7.10.jar:1.7.10]
at com.orientechnologies.orient.core.metadata.OMetadataDefault.init(OMetadataDefault.java:110) ~[orientdb-core-1.7.10.jar:1.7.10]
at com.orientechnologies.orient.core.metadata.OMetadataDefault.load(OMetadataDefault.java:68) ~[orientdb-core-1.7.10.jar:1.7.10]
at com.orientechnologies.orient.core.db.record.ODatabaseRecordAbstract.open(ODatabaseRecordAbstract.java:291) ~[orientdb-core-1.7.10.jar:1.7.10]
at com.orientechnologies.orient.core.db.ODatabaseWrapperAbstract.open(ODatabaseWrapperAbstract.java:49) ~[orientdb-core-1.7.10.jar:1.7.10]
at com.baasbox.db.DbHelper.open(DbHelper.java:353) ~[classes/:na]
at com.baasbox.Global.onStart(Global.java:169) ~[classes/:na]
at play.core.j.JavaGlobalSettingsAdapter.onStart(JavaGlobalSettingsAdapter.scala:18) [play_2.10.jar:2.2.4]
at play.api.GlobalPlugin.onStart(GlobalSettings.scala:203) [play_2.10.jar:2.2.4]
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:88) [play_2.10.jar:2.2.4]
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:88) [play_2.10.jar:2.2.4]
at scala.collection.immutable.List.foreach(List.scala:318) [scala-library.jar:na]
at play.api.Play$$anonfun$start$1.apply$mcV$sp(Play.scala:88) [play_2.10.jar:2.2.4]
at play.api.Play$$anonfun$start$1.apply(Play.scala:88) [play_2.10.jar:2.2.4]
at play.api.Play$$anonfun$start$1.apply(Play.scala:88) [play_2.10.jar:2.2.4]
at play.utils.Threads$.withContextClassLoader(Threads.scala:18) [play_2.10.jar:2.2.4]
at play.api.Play$.start(Play.scala:87) [play_2.10.jar:2.2.4]
at play.core.ReloadableApplication$$anonfun$get$1$$anonfun$apply$1$$anonfun$1.apply(ApplicationProvider.scala:139) [play_2.10.jar:2.2.4]
at play.core.ReloadableApplication$$anonfun$get$1$$anonfun$apply$1$$anonfun$1.apply(ApplicationProvider.scala:112) [play_2.10.jar:2.2.4]
at scala.Option.map(Option.scala:145) [scala-library.jar:na]
at play.core.ReloadableApplication$$anonfun$get$1$$anonfun$apply$1.apply(ApplicationProvider.scala:112) [play_2.10.jar:2.2.4]
at play.core.ReloadableApplication$$anonfun$get$1$$anonfun$apply$1.apply(ApplicationProvider.scala:110) [play_2.10.jar:2.2.4]
at scala.util.Success.flatMap(Try.scala:200) [scala-library.jar:na]
at play.core.ReloadableApplication$$anonfun$get$1.apply(ApplicationProvider.scala:110) [play_2.10.jar:2.2.4]
at play.core.ReloadableApplication$$anonfun$get$1.apply(ApplicationProvider.scala:102) [play_2.10.jar:2.2.4]
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24) [scala-library.jar:na]
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24) [scala-library.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask$AdaptedRunnableAction.exec(ForkJoinTask.java:1361) [scala-library.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [scala-library.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [scala-library.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [scala-library.jar:na]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [scala-library.jar:na]
I got this reply from the BaasBox team:
"When you use console.sh (the OrientDB console tool), be sure that BaasBox is not running, otherwise you can get data corruption. However, it is now clear that the error occurs after you have accessed the db via the console using a different version of the OrientDB tool."
I was using a different version of the OrientDB console (console.sh v2.0) than the OrientDB 1.7.10 embedded in BaasBox. This caused the corruption and may have messed up the DB schema.

Scala Typesafe Activator project timeout on OS X

I'm playing a bit with Scala and the Play Framework, but when I try to run my test project using Activator/sbt on a Mac I keep getting:
[info] Caused by: java.security.PrivilegedActionException: null
[info] at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_60]
[info] at play.runsupport.Reloader$.play$runsupport$Reloader$$withReloaderContextClassLoader(Reloader.scala:39) ~[na:na]
[info] at play.runsupport.Reloader.reload(Reloader.scala:321) ~[na:na]
[info] at play.core.server.DevServerStart$$anonfun$mainDev$1$$anon$1$$anonfun$get$1.apply(DevServerStart.scala:113) ~[play-server_2.11-2.4.3.jar:2.4.3]
[info] at play.core.server.DevServerStart$$anonfun$mainDev$1$$anon$1$$anonfun$get$1.apply(DevServerStart.scala:111) ~[play-server_2.11-2.4.3.jar:2.4.3]
[info] at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24) ~[scala-library-2.11.7.jar:na]
[info] at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24) ~[scala-library-2.11.7.jar:na]
[info] at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40) ~[akka-actor_2.11-2.3.13.jar:na]
[info] at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397) ~[akka-actor_2.11-2.3.13.jar:na]
[info] at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) ~[scala-library-2.11.7.jar:na]
[info] at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) ~[scala-library-2.11.7.jar:na]
[info] at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) ~[scala-library-2.11.7.jar:na]
[info] at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) ~[scala-library-2.11.7.jar:na]
[info] Caused by: java.util.concurrent.TimeoutException: Futures timed out after [300000 milliseconds]
[info] at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219) ~[scala-library-2.11.7.jar:na]
[info] at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223) ~[scala-library-2.11.7.jar:na]
[info] at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190) ~[scala-library-2.11.7.jar:na]
[info] at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53) ~[scala-library-2.11.7.jar:na]
[info] at scala.concurrent.Await$.result(package.scala:190) ~[scala-library-2.11.7.jar:na]
[info] at play.forkrun.ForkRun$$anonfun$askForReload$1.apply(ForkRun.scala:127) ~[na:na]
[info] at play.forkrun.ForkRun$$anonfun$askForReload$1.apply(ForkRun.scala:125) ~[na:na]
[info] at play.runsupport.Reloader$$anonfun$reload$1.apply(Reloader.scala:323) ~[na:na]
[info] at play.runsupport.Reloader$$anon$3.run(Reloader.scala:43) ~[na:na]
I'm not really sure what I'm doing wrong.
I had the same problem and I found the solution here: https://support.pivotal.io/hc/en-us/articles/201821186-Namenode-fails-to-start-with-error-jurisdiction-policy-files-are-not-signed-by-a-trusted-signer-
I already had the Java 8 JDK installed on my machine, so I had to download the JCE policy files from this website:
http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html
Then I just pasted the files into the location given in the "readme.txt"; the location depends on the OS you are using.
And everything works fine :)
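As a quick sanity check (my addition, not part of the linked instructions), you can ask the JRE you actually run whether the unlimited-strength policy was picked up: 128 means the default limited policy, anything at or above 256 means the unlimited files are in place.

import javax.crypto.Cipher;

public class JcePolicyCheck {
    public static void main(String[] args) throws Exception {
        // Returns Integer.MAX_VALUE when the unlimited policy files are installed.
        int maxKeyLength = Cipher.getMaxAllowedKeyLength("AES");
        System.out.println("Max AES key length: " + maxKeyLength);
    }
}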

Weird GWT compile error [duplicate]

Possible Duplicate:
exception in GWT RPC app
When compiling my project (2, I'm getting this exception. Has anyone seen this before?
[ERROR] Failure in unit cache map load.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Unable to read from byte cache
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at com.google.gwt.dev.javac.PersistentUnitCache.awaitUnitCacheMapLoad(PersistentUnitCache.java:466)
at com.google.gwt.dev.javac.PersistentUnitCache.find(PersistentUnitCache.java:391)
at com.google.gwt.dev.javac.CompilationStateBuilder.addArchive(CompilationStateBuilder.java:365)
at com.google.gwt.dev.ArchivePreloader.preloadArchives(ArchivePreloader.java:65)
at com.google.gwt.dev.Precompile.precompile(Precompile.java:243)
at com.google.gwt.dev.Precompile.precompile(Precompile.java:229)
at com.google.gwt.dev.Precompile.precompile(Precompile.java:141)
at com.google.gwt.dev.Compiler.run(Compiler.java:232)
at com.google.gwt.dev.Compiler.run(Compiler.java:198)
at com.google.gwt.dev.Compiler$1.run(Compiler.java:170)
at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:88)
at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:82)
at com.google.gwt.dev.Compiler.main(Compiler.java:177)
Caused by: java.lang.RuntimeException: Unable to read from byte cache
at com.google.gwt.dev.util.DiskCache.transferFromStream(DiskCache.java:171)
at com.google.gwt.dev.util.DiskCacheToken.readObject(DiskCacheToken.java:87)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:974)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1848)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
at java.util.ArrayList.readObject(ArrayList.java:593)
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:974)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1848)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
at com.google.gwt.dev.javac.CachedCompilationUnit.readObject(CachedCompilationUnit.java:205)
at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:974)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1848)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
at com.google.gwt.dev.javac.PersistentUnitCache.loadUnitMap(PersistentUnitCache.java:517)
at com.google.gwt.dev.javac.PersistentUnitCache.access$800(PersistentUnitCache.java:96)
at com.google.gwt.dev.javac.PersistentUnitCache$4.run(PersistentUnitCache.java:222)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.StreamCorruptedException: unexpected EOF in middle of data block
at java.io.ObjectInputStream$BlockDataInputStream.refill(ObjectInputStream.java:2494)
at java.io.ObjectInputStream$BlockDataInputStream.read(ObjectInputStream.java:2657)
at java.io.ObjectInputStream.read(ObjectInputStream.java:843)
at java.io.InputStream.read(InputStream.java:82)
at com.google.gwt.dev.util.DiskCache.transferFromStream(DiskCache.java:159)
... 40 more
From this link:
exception in GWT RPC app
Most likely the cache has become corrupted for some reason, so try to
remove the gwt-UnitCache folder from your project; this should help.
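If you want to script that cleanup, here is a small hedged sketch; the cache directory name and location are assumptions (some GWT versions write it as gwt-unitCache in the project or war directory), so adjust the path to your setup.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

public class ClearGwtUnitCache {
    public static void main(String[] args) throws IOException {
        // Assumed location of the GWT persistent unit cache; adjust to your project layout.
        Path cache = Paths.get("gwt-UnitCache");
        if (Files.exists(cache)) {
            try (Stream<Path> paths = Files.walk(cache)) {
                // Delete children before their parent directories.
                paths.sorted(Comparator.reverseOrder())
                     .forEach(p -> p.toFile().delete());
            }
        }
    }
}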