How do I set TTL and catch exceptions when using op-rabbit? - scala

I have some op-rabbit code that looks like this:
val subscriptionRef: SubscriptionRef = Subscription.run(rabbitControl) {
  val directive = body(UTF8StringMarshaller) & routingKey
  channel(qos = MAX_CONCURRENT_MSGS) {
    consume(topic(queue(inputQueue), List(inputKey))) {
      directive((s, key) => {
        processMessage(s, key)
        ack
      })
    }
  }
}
It runs fine in some applications, but in my latest application I got 5GB of errors in the logs in just a few minutes, and I'm trying to figure out where to handle the exceptions. The cause appears to be a mismatch between the TTL set on the queue (30 minutes, i.e. 1800000 ms) and what the application declares (apparently none). I want to specify the TTL, and if there is a problem I want to log it and then shut down immediately. I do not want a filesystem filled with stack traces like this:
18:39:08.518 [such-system-akka.actor.default-dispatcher-9] ERROR com.spingo.op_rabbit.SubscriptionActor - Connection related error while trying to re-bind a consumer to EXCHANGE.QUEUE. Waiting in anticipating of a new channel.
java.io.IOException: null
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:105) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:101) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:123) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.client.impl.ChannelN.queueDeclare(ChannelN.java:948) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.client.impl.ChannelN.queueDeclare(ChannelN.java:50) ~[amqp-client-4.0.0.jar:4.0.0]
at com.spingo.op_rabbit.QueueConcrete.declare(Queue.scala:31) ~[op-rabbit-core_2.12-2.0.0.jar:2.0.0]
at com.spingo.op_rabbit.Binding$$anon$2.declare(Binding.scala:79) ~[op-rabbit-core_2.12-2.0.0.jar:2.0.0]
at com.spingo.op_rabbit.SubscriptionActor.doSubscribe(SubscriptionActor.scala:222) ~[op-rabbit-core_2.12-2.0.0.jar:2.0.0]
at com.spingo.op_rabbit.SubscriptionActor$$anonfun$8.applyOrElse(SubscriptionActor.scala:170) ~[op-rabbit-core_2.12-2.0.0.jar:2.0.0]
at com.spingo.op_rabbit.SubscriptionActor$$anonfun$8.applyOrElse(SubscriptionActor.scala:157) ~[op-rabbit-core_2.12-2.0.0.jar:2.0.0]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:34) ~[scala-library-2.12.4.jar:?]
at akka.actor.FSM.$anonfun$handleTransition$1(FSM.scala:608) ~[akka-actor_2.12-2.5.4.jar:?]
at akka.actor.FSM.$anonfun$handleTransition$1$adapted(FSM.scala:608) ~[akka-actor_2.12-2.5.4.jar:?]
at scala.collection.immutable.List.foreach(List.scala:389) ~[scala-library-2.12.4.jar:?]
at akka.actor.FSM.handleTransition(FSM.scala:608) ~[akka-actor_2.12-2.5.4.jar:?]
at akka.actor.FSM.makeTransition(FSM.scala:690) ~[akka-actor_2.12-2.5.4.jar:?]
at akka.actor.FSM.makeTransition$(FSM.scala:683) ~[akka-actor_2.12-2.5.4.jar:?]
at com.spingo.op_rabbit.SubscriptionActor.makeTransition(SubscriptionActor.scala:11) ~[op-rabbit-core_2.12-2.0.0.jar:2.0.0]
at akka.actor.FSM.applyState(FSM.scala:675) ~[akka-actor_2.12-2.5.4.jar:?]
at akka.actor.FSM.applyState$(FSM.scala:673) ~[akka-actor_2.12-2.5.4.jar:?]
at com.spingo.op_rabbit.SubscriptionActor.applyState(SubscriptionActor.scala:11) ~[op-rabbit-core_2.12-2.0.0.jar:2.0.0]
at akka.actor.FSM.processEvent(FSM.scala:670) ~[akka-actor_2.12-2.5.4.jar:?]
at akka.actor.FSM.processEvent$(FSM.scala:662) ~[akka-actor_2.12-2.5.4.jar:?]
at com.spingo.op_rabbit.SubscriptionActor.akka$actor$LoggingFSM$$super$processEvent(SubscriptionActor.scala:11) ~[op-rabbit-core_2.12-2.0.0.jar:2.0.0]
at akka.actor.LoggingFSM.processEvent(FSM.scala:801) ~[akka-actor_2.12-2.5.4.jar:?]
at akka.actor.LoggingFSM.processEvent$(FSM.scala:783) ~[akka-actor_2.12-2.5.4.jar:?]
at com.spingo.op_rabbit.SubscriptionActor.processEvent(SubscriptionActor.scala:11) ~[op-rabbit-core_2.12-2.0.0.jar:2.0.0]
at akka.actor.FSM.akka$actor$FSM$$processMsg(FSM.scala:659) ~[akka-actor_2.12-2.5.4.jar:?]
at akka.actor.FSM$$anonfun$receive$1.applyOrElse(FSM.scala:653) ~[akka-actor_2.12-2.5.4.jar:?]
at akka.actor.Actor.aroundReceive(Actor.scala:514) ~[akka-actor_2.12-2.5.4.jar:?]
at akka.actor.Actor.aroundReceive$(Actor.scala:512) ~[akka-actor_2.12-2.5.4.jar:?]
at com.spingo.op_rabbit.SubscriptionActor.aroundReceive(SubscriptionActor.scala:11) ~[op-rabbit-core_2.12-2.0.0.jar:2.0.0]
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:527) [akka-actor_2.12-2.5.4.jar:?]
at akka.actor.ActorCell.invoke(ActorCell.scala:496) [akka-actor_2.12-2.5.4.jar:?]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257) [akka-actor_2.12-2.5.4.jar:?]
at akka.dispatch.Mailbox.run(Mailbox.scala:224) [akka-actor_2.12-2.5.4.jar:?]
at akka.dispatch.Mailbox.exec(Mailbox.scala:234) [akka-actor_2.12-2.5.4.jar:?]
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [akka-actor_2.12-2.5.4.jar:?]
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [akka-actor_2.12-2.5.4.jar:?]
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [akka-actor_2.12-2.5.4.jar:?]
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [akka-actor_2.12-2.5.4.jar:?]
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'x-message-ttl' for queue 'EXCHANGE.QUEUE' in vhost '/': received none but current is the value '1800000' of type 'long', class-id=50, method-id=10)
at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:66) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(BlockingValueOrException.java:32) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(AMQChannel.java:366) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.client.impl.AMQChannel.privateRpc(AMQChannel.java:229) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:117) ~[amqp-client-4.0.0.jar:4.0.0]
... 38 more
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'x-message-ttl' for queue 'EXCHANGE.QUEUE' in vhost '/': received none but current is the value '1800000' of type 'long', class-id=50, method-id=10)
at com.rabbitmq.client.impl.ChannelN.asyncShutdown(ChannelN.java:505) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.client.impl.ChannelN.processAsync(ChannelN.java:336) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.client.impl.AMQChannel.handleCompleteInboundCommand(AMQChannel.java:143) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:90) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.client.impl.AMQConnection.readFrame(AMQConnection.java:634) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.client.impl.AMQConnection.access$300(AMQConnection.java:47) ~[amqp-client-4.0.0.jar:4.0.0]
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:572) ~[amqp-client-4.0.0.jar:4.0.0]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_144]
18:39:08.534 [such-system-akka.actor.default-dispatcher-9] ERROR com.spingo.op_rabbit.SubscriptionActor - Connection related error while trying to re-bind a consumer to EXCHANGE.QUEUE. Waiting in anticipating of a new channel.

You are attempting to re-declare the queue without an x-message-ttl argument (the broker received none, while the existing queue was declared with 1800000). Delete the queue first; then it can be declared with whatever arguments your code would like.
channel error; protocol method: #method(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'x-message-ttl' for queue 'EXCHANGE.QUEUE' in vhost '/': received none but current is the value '1800000' of type 'long', class-id=50, method-id=10
To be completely clear, x-message-ttl is the per-message TTL (applied to every message placed in the queue), not the queue's own expiry (x-expires). There is a difference, and that might be your issue.
Otherwise, the code is going to keep retrying the same declaration, and the broker is going to keep rejecting it.
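If the queue already exists with a TTL, any declaration your consumer performs has to carry the same x-message-ttl argument, or the broker closes the channel exactly as in the log above. A minimal sketch using the plain RabbitMQ Java client from Scala (connection details and the queue name are placeholders; op-rabbit's Queue definition accepts queue arguments in a similar way, and recent versions also offer a passive declaration that accepts whatever configuration the broker already has, so check the API of the version you are on):

import com.rabbitmq.client.ConnectionFactory

val factory = new ConnectionFactory()
factory.setHost("localhost") // placeholder connection details
val connection = factory.newConnection()
val channel = connection.createChannel()

// The arguments must match the existing queue exactly, otherwise the broker
// replies with PRECONDITION_FAILED (reply-code 406) and closes the channel.
val args = new java.util.HashMap[String, AnyRef]()
args.put("x-message-ttl", Int.box(1800000)) // 30 minutes, matching the broker

channel.queueDeclare("EXCHANGE.QUEUE",
  true,  // durable
  false, // exclusive
  false, // autoDelete
  args)

For the "log it and shut down immediately" part, one approach is to impose your own deadline on the subscription becoming ready rather than letting the SubscriptionActor re-bind forever. A sketch, assuming the initialized future that SubscriptionRef exposes in op-rabbit 2.x (the log and actorSystem names are placeholders):

import scala.concurrent.Await
import scala.concurrent.duration._
import scala.util.control.NonFatal

// If the consumer cannot bind within the deadline (for example because of a
// mismatched x-message-ttl), log one error and terminate instead of letting
// the retry loop fill the filesystem with stack traces.
try Await.result(subscriptionRef.initialized, 30.seconds)
catch {
  case NonFatal(ex) =>
    log.error("Subscription failed to initialize; shutting down", ex)
    actorSystem.terminate()
}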

Related

File domain patron error when running spark streaming

I faced this error after running my application for several hours.
My Spark application reads a stream from a streaming Hudi table (a Hudi table that is constantly updated) and writes to a parquet file. Another stream reads that same parquet file and writes to another Hudi table. The flow is as follows:
Hudi -> stream 1 -> parquet -> stream 2 -> hudi
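A minimal sketch of that flow in Structured Streaming (paths, schema handling and options are placeholders, not the actual job; Hudi write options such as the record key and table name are omitted):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("hudi-parquet-hudi").getOrCreate()

// stream 1: streaming read from the source Hudi table, write parquet files
val stream1 = spark.readStream
  .format("hudi")
  .load("/path/to/source_hudi_table")
  .writeStream
  .format("parquet")
  .option("path", "/path/to/parquet_staging") // stream 2 reads this directory
  .option("checkpointLocation", "/path/to/checkpoints/stream1")
  .start()

// stream 2: file-source read of the same parquet directory, write to Hudi
val parquetSchema = spark.read.parquet("/path/to/parquet_staging").schema // file sources need an explicit schema
val stream2 = spark.readStream
  .schema(parquetSchema)
  .parquet("/path/to/parquet_staging")
  .writeStream
  .format("hudi")
  .option("checkpointLocation", "/path/to/checkpoints/stream2")
  .start("/path/to/target_hudi_table")

spark.streams.awaitAnyTermination()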
I can see that the error appears when stream 2 reads from the parquet file. The underlying storage is OneFS.
User class threw exception: org.apache.spark.sql.streaming.StreamingQueryException: Failed to get file domain patron for path /path/_temporary. Error: Name: _temporary Status: STATUS_OBJECT_NAME_NOT_FOUND
=== Streaming Query ===
Identifier: [id = 72e6b29c-a641-47ff-82fc-ccd8146a4226, runId = 22af0796-7cb4-4599-b29f-ee95bda27cb3]
Current Committed Offsets: {FileStreamSource[hdfs://path]: {"logOffset":60}}
Current Available Offsets: {FileStreamSource[hdfs://path]: {"logOffset":60}}
Current State: ACTIVE
Thread State: RUNNABLE
Logical Plan:
FileStreamSource[hdfs://path]
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:356)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:244)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed to get file domain patron for path path/_temporary. Error: Name: _temporary Status: STATUS_OBJECT_NAME_NOT_FOUND
at org.apache.hadoop.ipc.Client.call(Client.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1413)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy10.getListing(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:578)
at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy11.getListing(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2086)
at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:944)
at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:927)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:872)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:868)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:886)
at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1696)
at org.apache.spark.util.HadoopFSUtils$.listLeafFiles(HadoopFSUtils.scala:220)
at org.apache.spark.util.HadoopFSUtils$.$anonfun$parallelListLeafFilesInternal$1(HadoopFSUtils.scala:95)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at org.apache.spark.util.HadoopFSUtils$.parallelListLeafFilesInternal(HadoopFSUtils.scala:85)
at org.apache.spark.util.HadoopFSUtils$.parallelListLeafFiles(HadoopFSUtils.scala:69)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.bulkListLeafFiles(InMemoryFileIndex.scala:158)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.listLeafFiles(InMemoryFileIndex.scala:131)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.refresh0(InMemoryFileIndex.scala:94)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.<init>(InMemoryFileIndex.scala:66)
at org.apache.spark.sql.execution.streaming.FileStreamSource.allFilesUsingInMemoryFileIndex(FileStreamSource.scala:248)
at org.apache.spark.sql.execution.streaming.FileStreamSource.fetchAllFiles(FileStreamSource.scala:301)
at org.apache.spark.sql.execution.streaming.FileStreamSource.fetchMaxOffset(FileStreamSource.scala:128)
at org.apache.spark.sql.execution.streaming.FileStreamSource.latestOffset(FileStreamSource.scala:325)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$3(MicroBatchExecution.scala:394)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:357)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:355)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$2(MicroBatchExecution.scala:385)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:128)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$1(MicroBatchExecution.scala:382)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.withProgressLocked(MicroBatchExecution.scala:613)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.constructNextBatch(MicroBatchExecution.scala:378)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:

AnnotatedConnectException: finishConnect(..) failed: Connection refused: localhost/0:0:0:0:0:0:0:1:53686 error when executing beam pipeline on flink

I'm trying to set up a go-beam-flink-kafka cluster locally. I started the Kafka cluster along with a producer. I then started the expansion service, which is required for the cross-language features of the Beam Go SDK's kafkaio API, on port 8097. Then I started the Flink job server with the command docker run --platform linux/amd64 --net=host apache/beam_flink1.14_job_server:latest, which starts its own embedded Flink cluster. Then I wrote a simple pipeline to read from Kafka and print the values, as below.
package main

// Imports added for completeness; paths are the usual Beam Go SDK v2 packages.
import (
    "context"
    "flag"
    "time"

    "github.com/apache/beam/sdks/v2/go/pkg/beam"
    "github.com/apache/beam/sdks/v2/go/pkg/beam/io/xlang/kafkaio"
    "github.com/apache/beam/sdks/v2/go/pkg/beam/log"
    "github.com/apache/beam/sdks/v2/go/pkg/beam/register"
    "github.com/apache/beam/sdks/v2/go/pkg/beam/runners/flink"
)

type LogFn struct{}

func (fn *LogFn) ProcessElement(ctx context.Context, elm []byte) {
    log.Infof(ctx, "Ride info: %v", string(elm))
}

// FinishBundle waits a bit so the job server finishes receiving logs.
func (fn *LogFn) FinishBundle() {
    time.Sleep(2 * time.Second)
}

func init() {
    register.DoFn2x0[context.Context, []byte](&LogFn{})
}

func main() {
    flag.Parse()
    beam.Init()

    p, s := beam.NewPipelineWithRoot()
    ctx := context.Background()

    messages := kafkaio.Read(s,
        "localhost:8097", // expansion service address
        "localhost:9092", // Kafka bootstrap server
        []string{"quickstart-events"},
    )
    vals := beam.DropKey(s, messages)
    beam.ParDo0(s, &LogFn{}, vals)

    if _, err := flink.Execute(ctx, p); err != nil {
        log.Fatalf(ctx, "Failed to execute job: %v", err.Error())
    }
}
Then I start the pipeline with the command go run main.go --endpoint localhost:8099 --environment_type LOOPBACK, but I'm getting this error:
SEVERE: Error during job invocation go0job0101666771397124223000-root-1026080321-9741bbf6_e2058fe0-971e-4b98-b96e-77d2cb52fee0.
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
at akka.dispatch.OnComplete.internal(Future.scala:300)
at akka.dispatch.OnComplete.internal(Future.scala:297)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284)
at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:621)
at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:24)
at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23)
at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:532)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:252)
at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:242)
at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:233)
at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:684)
at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:79)
at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:444)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:316)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:314)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at akka.actor.Actor.aroundReceive(Actor.scala:537)
at akka.actor.Actor.aroundReceive$(Actor.scala:535)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
at akka.actor.ActorCell.invoke(ActorCell.scala:548)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
at akka.dispatch.Mailbox.run(Mailbox.scala:231)
at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
... 4 more
Caused by: org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException: org.apache.beam.vendor.grpc.v1p48p1.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:451)
at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:436)
at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:303)
at org.apache.beam.runners.fnexecution.control.DefaultExecutableStageContext.getStageBundleFactory(DefaultExecutableStageContext.java:38)
at org.apache.beam.runners.fnexecution.control.ReferenceCountingExecutableStageContextFactory$WrappedContext.getStageBundleFactory(ReferenceCountingExecutableStageContextFactory.java:202)
at org.apache.beam.runners.flink.translation.wrappers.streaming.ExecutableStageDoFnOperator.open(ExecutableStageDoFnOperator.java:248)
at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:110)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:711)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:687)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:654)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.beam.vendor.grpc.v1p48p1.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:271)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:252)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:165)
at org.apache.beam.model.fnexecution.v1.BeamFnExternalWorkerPoolGrpc$BeamFnExternalWorkerPoolBlockingStub.startWorker(BeamFnExternalWorkerPoolGrpc.java:225)
at org.apache.beam.runners.fnexecution.environment.ExternalEnvironmentFactory.createEnvironment(ExternalEnvironmentFactory.java:113)
at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:252)
at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:231)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
... 20 more
Caused by: org.apache.beam.vendor.grpc.v1p48p1.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: localhost/0:0:0:0:0:0:0:1:53686
Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.channel.unix.Socket.finishConnect(Socket.java:321)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:477)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:385)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:750)
If anyone has set up this exact stack, can you help me resolve this issue?

Spring Boot with MongoDB but connecting to Azure Cosmos DB

I have a Spring MongoDB app that I wanted to take to Azure, so I decided to use Cosmos DB. I made the following changes to my application.properties file:
spring.data.mongodb.uri=mongodb://[username]:[password]@[dbname].documents.azure.com:10255/?ssl=true
spring.data.mongodb.database=dbname
I am getting the following exception:
ActivityId: 25611363-0000-0000-0000-000000000000, Request URI: /apps/bbbd93b0-83ee-44a2-9015-ca7226457764/services/63c75889-e342-42b3-81b0-4851cae426d7/partitions/89ba02e8-b034-4b75-b8a0-57194d79f785/replicas/131587440168033880p, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.121.4' on server dz5prdddc02-docdb-1.documents.azure.com:10255. The full response is { "_t" : "OKMongoResponse", "ok" : 0, "code" : 2, "errmsg" : "Message: {\"Errors\":[\"Too many 'included' paths (106) specified in policy. A maximum of 100 is allowed.\"]}\r\nActivityId: 25611363-0000-0000-0000-000000000000, Request URI: /apps/bbbd93b0-83ee-44a2-9015-ca7226457764/services/63c75889-e342-42b3-81b0-4851cae426d7/partitions/89ba02e8-b034-4b75-b8a0-57194d79f785/replicas/131587440168033880p, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.121.4", "$err" : "Message: {\"Errors\":[\"Too many 'included' paths (106) specified in policy. A maximum of 100 is allowed.\"]}\r\nActivityId: 25611363-0000-0000-0000-000000000000, Request URI: /apps/bbbd93b0-83ee-44a2-9015-ca7226457764/services/63c75889-e342-42b3-81b0-4851cae426d7/partitions/89ba02e8-b034-4b75-b8a0-57194d79f785/replicas/131587440168033880p, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.121.4" }
at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:107) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.createIndex(MongoPersistentEntityIndexCreator.java:162) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.checkForAndCreateIndexes(MongoPersistentEntityIndexCreator.java:133) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.checkForIndexes(MongoPersistentEntityIndexCreator.java:125) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.<init>(MongoPersistentEntityIndexCreator.java:91) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.<init>(MongoPersistentEntityIndexCreator.java:68) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.data.mongodb.core.MongoTemplate.<init>(MongoTemplate.java:233) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.boot.autoconfigure.data.mongo.MongoDataAutoConfiguration.mongoTemplate(MongoDataAutoConfiguration.java:101) ~[spring-boot-autoconfigure-1.5.9.RELEASE.jar:1.5.9.RELEASE]
at org.springframework.boot.autoconfigure.data.mongo.MongoDataAutoConfiguration$$EnhancerBySpringCGLIB$$7c4704f8.CGLIB$mongoTemplate$1(<generated>) ~[spring-boot-autoconfigure-1.5.9.RELEASE.jar:1.5.9.RELEASE]
at org.springframework.boot.autoconfigure.data.mongo.MongoDataAutoConfiguration$$EnhancerBySpringCGLIB$$7c4704f8$$FastClassBySpringCGLIB$$e0a4d3c3.invoke(<generated>) ~[spring-boot-autoconfigure-1.5.9.RELEASE.jar:1.5.9.RELEASE]
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228) ~[spring-core-4.3.13.RELEASE.jar:4.3.13.RELEASE]
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:358) ~[spring-context-4.3.13.RELEASE.jar:4.3.13.RELEASE]
at org.springframework.boot.autoconfigure.data.mongo.MongoDataAutoConfiguration$$EnhancerBySpringCGLIB$$7c4704f8.mongoTemplate(<generated>) ~[spring-boot-autoconfigure-1.5.9.RELEASE.jar:1.5.9.RELEASE]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_151]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_151]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_151]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_151]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162) ~[spring-beans-4.3.13.RELEASE.jar:4.3.13.RELEASE]
... 47 common frames omitted
Caused by: com.mongodb.MongoCommandException: Command failed with error 2: 'Message: {"Errors":["Too many 'included' paths (106) specified in policy. A maximum of 100 is allowed."]}
ActivityId: 25611363-0000-0000-0000-000000000000, Request URI: /apps/bbbd93b0-83ee-44a2-9015-ca7226457764/services/63c75889-e342-42b3-81b0-4851cae426d7/partitions/89ba02e8-b034-4b75-b8a0-57194d79f785/replicas/131587440168033880p, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.121.4' on server dz5prdddc02-docdb-1.documents.azure.com:10255. The full response is { "_t" : "OKMongoResponse", "ok" : 0, "code" : 2, "errmsg" : "Message: {\"Errors\":[\"Too many 'included' paths (106) specified in policy. A maximum of 100 is allowed.\"]}\r\nActivityId: 25611363-0000-0000-0000-000000000000, Request URI: /apps/bbbd93b0-83ee-44a2-9015-ca7226457764/services/63c75889-e342-42b3-81b0-4851cae426d7/partitions/89ba02e8-b034-4b75-b8a0-57194d79f785/replicas/131587440168033880p, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.121.4", "$err" : "Message: {\"Errors\":[\"Too many 'included' paths (106) specified in policy. A maximum of 100 is allowed.\"]}\r\nActivityId: 25611363-0000-0000-0000-000000000000, Request URI: /apps/bbbd93b0-83ee-44a2-9015-ca7226457764/services/63c75889-e342-42b3-81b0-4851cae426d7/partitions/89ba02e8-b034-4b75-b8a0-57194d79f785/replicas/131587440168033880p, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.121.4" }
at com.mongodb.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:115) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.connection.CommandProtocol.execute(CommandProtocol.java:114) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:168) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:289) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.connection.DefaultServerConnection.command(DefaultServerConnection.java:176) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:216) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:207) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:146) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:139) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CreateIndexesOperation$1.call(CreateIndexesOperation.java:150) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CreateIndexesOperation$1.call(CreateIndexesOperation.java:144) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:426) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:417) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CreateIndexesOperation.execute(CreateIndexesOperation.java:144) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CreateIndexesOperation.execute(CreateIndexesOperation.java:71) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.Mongo.execute(Mongo.java:845) ~[mongodb-driver-3.4.3.jar:na]
at com.mongodb.Mongo$2.execute(Mongo.java:828) ~[mongodb-driver-3.4.3.jar:na]
at com.mongodb.DBCollection.createIndex(DBCollection.java:1618) ~[mongodb-driver-3.4.3.jar:na]
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.createIndex(MongoPersistentEntityIndexCreator.java:142) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
... 63 common frames omitted
This error ("Too many 'included' paths (106) specified in policy. A maximum of 100 is allowed.") occurs when you create multiple indexes on the account and exceed the limit of 100. However, you don't have to create most of these indexes (if any): in contrast to MongoDB, Cosmos DB automatically indexes all paths in a document, so explicit indexes aren't required. Try to exclude createIndex/ensureIndex commands and their Spring equivalents, unless they create unique indexes (you do need createIndex for those, since the automatic policy doesn't know which fields you want that constraint on).
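For example, with a hypothetical Spring Data entity like the one below, every @Indexed field triggers an ensureIndex call at startup (that is what MongoPersistentEntityIndexCreator in the stack trace is doing); against Cosmos DB most of these annotations can simply be removed, keeping only the unique ones:

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection = "orders")
public class Order {
    @Id
    private String id;

    @Indexed                // candidate for removal: Cosmos DB indexes this path automatically
    private String customerId;

    @Indexed(unique = true) // keep: a unique constraint still needs an explicit index
    private String orderNumber;
}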

play 2.6 evolutions postgresql : relation "play_evolutions" does not exist

I am trying to work with Play evolutions and PostgreSQL, but something doesn't seem to work.
I have gone over the instructions in the Play documentation.
It seems that Play doesn't want to use the driver I specified:
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgresql://localhost/application"
db.default.logSql=true
db.default.user="postgres"
db.default.password=""
When starting the application I get the following:
12:23:58.475 [play-dev-mode-akka.actor.default-dispatcher-2] [ERROR]- org.jdbcdslog.StatementLogger(24) - java.sql.Statement.executeQuery: select id, hash, apply_script, revert_script, state, last_problem from play_evolutions where state like 'applying_%';
throws exception: org.postgresql.util.PSQLException: ERROR: relation "play_evolutions" does not exist
Position: 72
org.postgresql.util.PSQLException: ERROR: relation "play_evolutions" does not exist
Position: 72
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2182)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1911)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:173)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:645)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:481)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:361)
at com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:111)
at com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.jdbcdslog.StatementLoggingHandler.invoke(StatementLoggingHandler.java:27)
at com.sun.proxy.$Proxy21.executeQuery(Unknown Source)
at play.api.db.evolutions.DatabaseEvolutions.executeQuery(EvolutionsApi.scala:316)
at play.api.db.evolutions.DatabaseEvolutions.checkEvolutionsState(EvolutionsApi.scala:270)
at play.api.db.evolutions.DatabaseEvolutions.databaseEvolutions(EvolutionsApi.scala:135)
at play.api.db.evolutions.DatabaseEvolutions.scripts(EvolutionsApi.scala:110)
at play.api.db.evolutions.DatabaseEvolutions.scripts(EvolutionsApi.scala:125)
at play.api.db.evolutions.DefaultEvolutionsApi.scripts(EvolutionsApi.scala:90)
at play.api.db.evolutions.ApplicationEvolutions$$anonfun$play$api$db$evolutions$ApplicationEvolutions$$runEvolutions$1.apply$mcV$sp(ApplicationEvolutions.scala:53)
at play.api.db.evolutions.ApplicationEvolutions.withLock(ApplicationEvolutions.scala:100)
at play.api.db.evolutions.ApplicationEvolutions.play$api$db$evolutions$ApplicationEvolutions$$runEvolutions(ApplicationEvolutions.scala:49)
at play.api.db.evolutions.ApplicationEvolutions$$anonfun$start$1.apply(ApplicationEvolutions.scala:42)
at play.api.db.evolutions.ApplicationEvolutions$$anonfun$start$1.apply(ApplicationEvolutions.scala:42)
at scala.collection.immutable.List.foreach(List.scala:392)
at play.api.db.evolutions.ApplicationEvolutions.start(ApplicationEvolutions.scala:42)
at play.api.db.evolutions.ApplicationEvolutions.<init>(ApplicationEvolutions.scala:151)
at play.api.db.evolutions.ApplicationEvolutionsProvider.get$lzycompute(EvolutionsModule.scala:49)
at play.api.db.evolutions.ApplicationEvolutionsProvider.get(EvolutionsModule.scala:49)
at play.api.db.evolutions.ApplicationEvolutionsProvider.get(EvolutionsModule.scala:40)
at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:81)
at com.google.inject.internal.BoundProviderFactory.provision(BoundProviderFactory.java:72)
at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:61)
at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:62)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:194)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41)
at com.google.inject.internal.InternalInjectorCreator$1.call(InternalInjectorCreator.java:205)
at com.google.inject.internal.InternalInjectorCreator$1.call(InternalInjectorCreator.java:199)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1085)
at com.google.inject.internal.InternalInjectorCreator.loadEagerSingletons(InternalInjectorCreator.java:199)
at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:180)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:110)
at com.google.inject.Guice.createInjector(Guice.java:99)
at com.google.inject.Guice.createInjector(Guice.java:84)
at play.api.inject.guice.GuiceBuilder.injector(GuiceInjectorBuilder.scala:185)
at play.api.inject.guice.GuiceApplicationBuilder.build(GuiceApplicationBuilder.scala:137)
at play.api.inject.guice.GuiceApplicationLoader.load(GuiceApplicationLoader.scala:21)
at play.core.server.DevServerStart$$anonfun$mainDev$1$$anon$1$$anonfun$1.apply(DevServerStart.scala:174)
at play.core.server.DevServerStart$$anonfun$mainDev$1$$anon$1$$anonfun$1.apply(DevServerStart.scala:171)
at play.utils.Threads$.withContextClassLoader(Threads.scala:21)
at play.core.server.DevServerStart$$anonfun$mainDev$1$$anon$1.reload(DevServerStart.scala:171)
at play.core.server.DevServerStart$$anonfun$mainDev$1$$anon$1.get(DevServerStart.scala:124)
at play.core.server.AkkaHttpServer.play$core$server$AkkaHttpServer$$modelConversion(AkkaHttpServer.scala:183)
at play.core.server.AkkaHttpServer.play$core$server$AkkaHttpServer$$handleRequest(AkkaHttpServer.scala:189)
at play.core.server.AkkaHttpServer$$anonfun$5.apply(AkkaHttpServer.scala:106)
at play.core.server.AkkaHttpServer$$anonfun$5.apply(AkkaHttpServer.scala:106)
at akka.stream.impl.fusing.MapAsync$$anon$23.onPush(Ops.scala:1172)
at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:499)
at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:462)
at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:368)
at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:571)
at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:457)
at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:546)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:725)
at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:740)
at akka.actor.Actor$class.aroundReceive(Actor.scala:514)
at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:650)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:527)
at akka.actor.ActorCell.invoke(ActorCell.scala:496)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
12:23:58.543 [play-dev-mode-akka.actor.default-dispatcher-2] [INFO]- org.jdbcdslog.StatementLogger(10) - java.sql.Statement.execute:
create table play_evolutions (
id int not null primary key,
hash varchar(255) not null,
applied_at timestamp not null,
apply_script text,
revert_script text,
state varchar(255),
last_problem text
)
;
This is correct behavior: the evolutions module checks whether the play_evolutions table exists by querying it and catching the failure (try/catch style). At the end of the log you can see the table being created right after the existence check:
12:23:58.543 [play-dev-mode-akka.actor.default-dispatcher-2] [INFO]- org.jdbcdslog.StatementLogger(10) - java.sql.Statement.execute:
create table play_evolutions (
id int not null primary key,
hash varchar(255) not null,
applied_at timestamp not null,
apply_script text,
revert_script text,
state varchar(255),
last_problem text
)
;
You can usually ignore this error.
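For reference, the evolution scripts themselves live under conf/evolutions/default/ (for the default database) and are numbered 1.sql, 2.sql, and so on. A minimal sketch of a first script (the table is a placeholder, not from the question):

# --- !Ups

create table users (
  id bigint not null primary key,
  name varchar(255) not null
);

# --- !Downs

drop table users;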

KafkaSpout tuple replay throws null pointer exception

I am using Storm 1.0.1 and Kafka 0.10.0.0 with storm-kafka-client 1.0.3.
Please find my spout configuration code below.
kafkaConsumerProps.put(KafkaSpoutConfig.Consumer.KEY_DESERIALIZER, "org.apache.kafka.common.serialization.ByteArrayDeserializer");
kafkaConsumerProps.put(KafkaSpoutConfig.Consumer.VALUE_DESERIALIZER, "org.apache.kafka.common.serialization.ByteArrayDeserializer");

KafkaSpoutStreams kafkaSpoutStreams =
    new KafkaSpoutStreamsNamedTopics.Builder(new Fields(fieldNames), topics).build();

KafkaSpoutRetryService retryService = new KafkaSpoutRetryExponentialBackoff(
    TimeInterval.microSeconds(500), TimeInterval.milliSeconds(2),
    Integer.MAX_VALUE, TimeInterval.seconds(10));

KafkaSpoutTuplesBuilder tuplesBuilder =
    new KafkaSpoutTuplesBuilderNamedTopics.Builder(new TestTupleBuilder(topics)).build();

KafkaSpoutConfig kafkaSpoutConfig =
    new KafkaSpoutConfig.Builder<String, String>(kafkaConsumerProps, kafkaSpoutStreams, tuplesBuilder, retryService)
        .setOffsetCommitPeriodMs(10_000)
        .setFirstPollOffsetStrategy(LATEST)
        .setMaxRetries(5)
        .setMaxUncommittedOffsets(250)
        .build();
When I fail a tuple, it is not replayed; instead the spout throws the error below.
Please let me know why it throws a NullPointerException.
53501 [Thread-359-test-spout-executor[295 295]] ERROR o.a.s.util - Async loop died!
java.lang.NullPointerException
at org.apache.storm.kafka.spout.KafkaSpout.doSeekRetriableTopicPartitions(KafkaSpout.java:260) ~[storm-kafka-client-1.0.3.jar:1.0.3]
at org.apache.storm.kafka.spout.KafkaSpout.pollKafkaBroker(KafkaSpout.java:248) ~[storm-kafka-client-1.0.3.jar:1.0.3]
at org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:203) ~[storm-kafka-client-1.0.3.jar:1.0.3]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.8.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_102]
53501 [Thread-359-test-spout-executor[295 295]] ERROR o.a.s.d.executor -
java.lang.NullPointerException
at org.apache.storm.kafka.spout.KafkaSpout.doSeekRetriableTopicPartitions(KafkaSpout.java:260) ~[storm-kafka-client-1.0.3.jar:1.0.3]
at org.apache.storm.kafka.spout.KafkaSpout.pollKafkaBroker(KafkaSpout.java:248) ~[storm-kafka-client-1.0.3.jar:1.0.3]
at org.apache.storm.kafka.spout.KafkaSpout.nextTuple(KafkaSpout.java:203) ~[storm-kafka-client-1.0.3.jar:1.0.3]
at org.apache.storm.daemon.executor$fn__7885$fn__7900$fn__7931.invoke(executor.clj:645) ~[storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:484) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.8.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_102]
53527 [Thread-359-test-spout-executor[295 295]] ERROR o.a.s.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.8.0.jar:?]
at org.apache.storm.daemon.worker$fn__8554$fn__8555.invoke(worker.clj:761) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.daemon.executor$mk_executor_data$fn__7773$fn__7774.invoke(executor.clj:271) [storm-core-1.0.1.jar:1.0.1]
at org.apache.storm.util$async_loop$fn__625.invoke(util.clj:494) [storm-core-1.0.1.jar:1.0.1]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.8.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_102]
Please find the complete spout configs below
{key.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer, value.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer, group.id=test-group, ssl.keystore.location=C:/test.jks, bootstrap.servers=localhost:1000, auto.commit.interval.ms=1000, security.protocol=SSL, enable.auto.commit=true, ssl.truststore.location=C:/test1.jks, ssl.keystore.password=pass123, ssl.key.password=pass123, ssl.truststore.password=pass123, session.timeout.ms=30000, auto.offset.reset=latest}
Storm 1.0.1 ships storm-kafka-client in beta quality. We have fixed a few issues, and a more stable version is available in the Storm 1.1 release; it can be used against Kafka 0.10 onwards.
In your topology you can declare a dependency on storm-kafka-client version 1.1 and a kafka-clients dependency of the appropriate version. You don't need to upgrade the Storm cluster itself.
I had enable.auto.commit=true; changing the value to false resolved the issue for me.
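For reference, a minimal sketch of that change against the consumer properties shown above (the spout manages offset commits itself, so Kafka's auto-commit appears to conflict with its retry bookkeeping):

// Set this before building the KafkaSpoutConfig
kafkaConsumerProps.put("enable.auto.commit", "false");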