How to implement Spring Feign POST and DELETE - spring-cloud

I built a Spring Cloud system that contains Eureka, a user-service (a spring-data-rest user API) and a feign-client service.
In the feign client:
@FeignClient("useraccount")
public interface UserFeign {

    @RequestMapping(method = RequestMethod.POST, value = "/users", consumes = "application/json")
    void createUser(@RequestBody User user);

    @RequestMapping(method = RequestMethod.DELETE, value = "/users/{id}")
    void delById(@PathVariable("id") String id);
}
I want to implement removing and storing users in the feign-client by calling the user-service API. So I created a REST controller (JS transfers data to it):
@Autowired
private UserFeign userFeign;

// save controller
@RequestMapping(method = RequestMethod.POST, value = "/property/register")
public ResponseEntity<?> createUser(@RequestBody User user) {
    userFeign.createUser(user);
    return ResponseEntity.ok().build();
}

// and delete controller
@RequestMapping(method = RequestMethod.DELETE, value = "/property/{id}")
public String hello(@PathVariable("id") String id) {
    userFeign.delById(id);
    return "hello";
}
But it always fails with errors:
2016-04-16 20:05:41.162 .DynamicServerListLoadBalancer DynamicServerListLoadBalancer for client useraccount initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=useraccount,current list of Servers=[192.168.1.101:useraccount:d3fb971b6fe30dc5e9cbfdf0e713cd12],Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone; Instance count:1; Active connections count: 0; Circuit breaker tripped count: 0; Active connections per server: 0.0;]
},Server stats: [[Server:192.168.1.101:useraccount:d3fb971b6fe30dc5e9cbfdf0e713cd12; Zone:defaultZone; Total Requests:0; Successive connection failure:0; Total blackout seconds:0; Last connection made:Thu Jan 01 08:00:00 CST 1970; First connection made: Thu Jan 01 08:00:00 CST 1970; Active Connections:0; total failure count in last (1000) msecs:0; average resp time:0.0; 90 percentile resp time:0.0; 95 percentile resp time:0.0; min resp time:0.0; max resp time:0.0; stddev resp time:0.0]
]}ServerList:org.springframework.cloud.netflix.ribbon.eureka.DomainExtractingServerList@19c54b19
2016-04-16 20:05:41.836 [nio-8002-exec-4] o.a.c.c.C.[.[.[/].[dispatcherServlet] Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is com.netflix.hystrix.exception.HystrixRuntimeException: createUser timed-out and no fallback available.] with root cause
java.util.concurrent.TimeoutException: null
at com.netflix.hystrix.AbstractCommand$9.call(AbstractCommand.java:600) ~[hystrix-core-1.5.1.jar:1.5.1]
at com.netflix.hystrix.AbstractCommand$9.call(AbstractCommand.java:580) ~[hystrix-core-1.5.1.jar:1.5.1]
at rx.internal.operators.OperatorOnErrorResumeNextViaFunction$1.onError(OperatorOnErrorResumeNextViaFunction.java:99) ~[rxjava-1.0.14.jar:1.0.14]
at rx.internal.operators.OperatorDoOnEach$1.onError(OperatorDoOnEach.java:70) ~[rxjava-1.0.14.jar:1.0.14]
at rx.internal.operators.OperatorDoOnEach$1.onError(OperatorDoOnEach.java:70) ~[rxjava-1.0.14.jar:1.0.14]
at com.netflix.hystrix.AbstractCommand$HystrixObservableTimeoutOperator$1.run(AbstractCommand.java:955) ~[hystrix-core-1.5.1.jar:1.5.1]
at com.netflix.hystrix.strategy.concurrency.HystrixContextRunnable$1.call(HystrixContextRunnable.java:41) ~[hystrix-core-1.5.1.jar:1.5.1]
at com.netflix.hystrix.strategy.concurrency.HystrixContextRunnable$1.call(HystrixContextRunnable.java:37) ~[hystrix-core-1.5.1.jar:1.5.1]
at com.netflix.hystrix.strategy.concurrency.HystrixContextRunnable.run(HystrixContextRunnable.java:57) ~[hystrix-core-1.5.1.jar:1.5.1]
at com.netflix.hystrix.AbstractCommand$HystrixObservableTimeoutOperator$2.tick(AbstractCommand.java:972) ~[hystrix-core-1.5.1.jar:1.5.1]
at com.netflix.hystrix.util.HystrixTimer$1.run(HystrixTimer.java:99) ~[hystrix-core-1.5.1.jar:1.5.1]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_40]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[na:1.8.0_40]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) ~[na:1.8.0_40]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) ~[na:1.8.0_40]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_40]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_40]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
Maybe the above store and delete methods have problems, but can anyone show me the right (or a better) way?

It looks like your Hystrix command is timing out. Assuming you haven't defined any Hystrix commands yourself (none are mentioned in your post): Hystrix is a circuit-breaker technology that comes bundled with the Feign client. You can disable it with:
feign.hystrix.enabled=false
(And you can find more information about Feign-Hystrix integration at http://cloud.spring.io/spring-cloud-netflix/spring-cloud-netflix.html#spring-cloud-feign-hystrix)
This won't address the root cause: something is taking a long time to complete. You will need to debug and find out what's hanging. It could be that the endpoint your Feign client points at accepts connections but is not responsive, although I would usually expect to see a specific connection timeout exception in that case.
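If the endpoint turns out to be merely slow but healthy, an alternative to disabling Hystrix entirely is raising its command timeout; the default is 1000 ms, which lines up with how quickly your request trips. A minimal application.properties sketch (the Ribbon timeouts are raised to match, since Ribbon can otherwise still cut the call short):
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=10000
ribbon.ConnectTimeout=3000
ribbon.ReadTimeout=10000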

This is not the actual exception; you should find the reason why it timed out. Enable logging, then you can see why it is failing:
import org.springframework.context.annotation.Bean;

import feign.Logger;

public class FeignConfiguration {

    @Bean
    Logger.Level feignLoggerLevel() {
        return Logger.Level.FULL;
    }
}
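One caveat, if I remember the Spring Cloud wiring correctly: the FULL level only becomes visible once the configuration is attached to the client, e.g. @FeignClient(name = "useraccount", configuration = FeignConfiguration.class), and the Feign interface's logger is set to DEBUG in application.properties. A sketch, where com.example is a placeholder for your real package:
logging.level.com.example.UserFeign=DEBUG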
Also check whether the server is resolved at all. Your log should contain something like the following:
DynamicServerListLoadBalancer for client auth-service initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=auth-service,current list of Servers=[192.168.2.243:7277],Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone; Instance count:1; Active connections count: 0; Circuit breaker tripped count: 0; Active connections per server: 0.0;]
},Server stats: [[Server:192.168.2.243:8180; Zone:defaultZone; Total Requests:0; Successive connection failure:0; Total blackout seconds:0; Last connection made:Thu Jan 01 06:00:00 BDT 1970; First connection made: Thu Jan 01 06:00:00 BDT 1970; Active Connections:0; total failure count in last (1000) msecs:0; average resp time:0.0; 90 percentile resp time:0.0; 95 percentile resp time:0.0; min resp time:0.0; max resp time:0.0; stddev resp time:0.0]
]}ServerList:org.springframework.cloud.netflix.ribbon.eureka.DomainExtractingServerList@9481a7c

Related

TEIID40007 Keepalive failed for session

I am trying to connect to a federated VDB via Spotfire. I keep getting the following error after fetching 200K-plus records. There are 3M records in total in the view, and the Teiid session timeout has no limit.
TEIID40007 Keepalive failed for session E0NBLogYqzk3
I am not experiencing the above error when I read the VDB in a SQL client like DBeaver or SQuirreL, where I can fetch the entire dataset.
Below is the snippet from server.log in JBoss:
setup [PolicyOutInterceptor]
pre-logical [ClientRequestFilterInterceptor]
prepare-send [MessageSenderInterceptor]
write [BodyWriter]
prepare-send-ending [MessageSenderEndingInterceptor]
2022-09-08 12:01:06,358 FINE [org.apache.cxf.phase.PhaseInterceptorChain] (Worker1579_QueryProcessorQueue29181462) Invoking handleMessage on interceptor org.apache.cxf.jaxrs.client.WebClient$BodyWriter@511fa86c
2022-09-08 12:01:06,358 FINE [org.apache.cxf.phase.PhaseInterceptorChain] (Worker1579_QueryProcessorQueue29181462) Invoking handleMessage on interceptor org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor@4a217e2f
2022-09-08 12:01:06,359 FINE [org.apache.cxf.transport.http.Headers] (Worker1579_QueryProcessorQueue29181462) Accept: application/json
2022-09-08 12:01:06,359 FINE [org.apache.cxf.transport.http.Headers] (Worker1579_QueryProcessorQueue29181462) User-Agent: Teiid Server
2022-09-08 12:01:06,359 FINE [org.apache.cxf.transport.http.Headers] (Worker1579_QueryProcessorQueue29181462) Content-length: 0
2022-09-08 12:01:06,359 FINE [org.apache.cxf.transport.http.Headers] (Worker1579_QueryProcessorQueue29181462) Content-Type: application/json
2022-09-08 12:01:06,359 FINE [org.apache.cxf.transport.http.HTTPConduit] (Worker1579_QueryProcessorQueue29181462) No Trust Decider for Conduit '{http://solrserver.net:8983/solr/document/select}WebClient.http-conduit'. An afirmative Trust Decision is assumed.
2022-09-08 12:01:06,359 FINE [org.apache.cxf.transport.http.HTTPConduit] (Worker1579_QueryProcessorQueue29181462) Sending POST Message with Headers to http://solrserver.net:8983/solr/document/select Conduit :{http://solrserver.net:8983/solr/document/select}WebClient.http-conduit
2022-09-08 12:01:06,359 INFO [org.teiid.SECURITY] (SessionMonitor) TEIID40007 Keepalive failed for session E0NBLogYqzk3
2022-09-08 12:01:06,359 INFO [org.teiid.SECURITY] (SessionMonitor) TEIID40007 Keepalive failed for session E0NBLogYqzk3
2022-09-08 12:01:06,359 DEBUG [org.teiid.SECURITY] (SessionMonitor) closeSession E0NBLogYqzk3
2022-09-08 12:01:06,359 DEBUG [org.teiid.SECURITY] (SessionMonitor) closeSession E0NBLogYqzk3
2022-09-08 12:01:06,359 DEBUG [org.teiid.AUDIT_LOG] (SessionMonitor) [svcpds] <session.logoff>
2022-09-08 12:01:06,359 DEBUG [org.teiid.AUDIT_LOG] (SessionMonitor) [svcpds] <session.logoff>
2022-09-08 12:01:06,359 DEBUG [org.teiid.COMMAND_LOG] (SessionMonitor) CANCEL SRC COMMAND: endTime=2022-09-08 12:01:06.359 requestID=E0NBLogYqzk3.5 sourceCommandID=7 executionID=8950651 txID=null modelName=PDS_SOURCE_MODEL translatorName=delegate sessionID=E0NBLogYqzk3 principal=svcpds finalRowCount=-1
2022-09-08 12:01:06,359 DEBUG [org.teiid.COMMAND_LOG] (SessionMonitor) CANCEL SRC COMMAND: endTime=2022-09-08 12:01:06.359 requestID=E0NBLogYqzk3.5 sourceCommandID=7 executionID=8950651 txID=null modelName=PDS_SOURCE_MODEL translatorName=delegate sessionID=E0NBLogYqzk3 principal=svcpds finalRowCount=-1
2022-09-08 12:01:06,360 FINE [org.apache.cxf.phase.PhaseInterceptorChain] (Worker1579_QueryProcessorQueue29181462) Adding interceptor org.apache.cxf.ws.policy.PolicyInInterceptor@3404a1c0 to phase receive
2022-09-08 12:01:06,360 FINE [org.apache.cxf.phase.PhaseInterceptorChain] (Worker1579_QueryProcessorQueue29181462) Adding interceptor org.apache.cxf.jaxrs.client.WebClient$ClientAsyncResponseInterceptor@6f403813 to phase unmarshal
2022-09-08 12:01:06,360 DEBUG [org.teiid.COMMAND_LOG] (SessionMonitor) CANCEL SRC COMMAND: endTime=2022-09-08 12:01:06.36 requestID=E0NBLogYqzk3.5 sourceCommandID=3 executionID=9226341 txID=null modelName=GF translatorName=rest sessionID=E0NBLogYqzk3 principal=svcpds finalRowCount=-1
2022-09-08 12:01:06,360 FINE [org.apache.cxf.phase.PhaseInterceptorChain] (Worker1579_QueryProcessorQueue29181462) Adding interceptor org.apache.cxf.jaxrs.client.spec.ClientResponseFilterInterceptor@43b0a15f to phase pre-protocol-frontend
2022-09-08 12:01:06,360 FINE [org.apache.cxf.phase.PhaseInterceptorChain] (Worker1579_QueryProcessorQueue29181462) Chain org.apache.cxf.phase.PhaseInterceptorChain@5af24963 was created. Current flow:
receive [PolicyInInterceptor]
pre-protocol-frontend [ClientResponseFilterInterceptor]
unmarshal [ClientAsyncResponseInterceptor]
Below is the error I am getting on the Spotfire client:
ImportException at Spotfire.Dxp.Data:
Failed to create DataTable (HRESULT: 80131500)
Stack Trace:
at Spotfire.Dxp.Data.ColumnFactory.CreateColumns(DataRowReader reader, String documentTitleForOrigin, IDataPropertyContainer defaultProperties, DataPropertyRegistry dataPropertyRegistry, GlobalMethodRegistry globalMethodRegistry, CxxSession session, Boolean addNewProperties, PartialDataLoadReport loadReport, ResultProperties resultProperties, PendingViewRequestsManager pendingViewRequestsManager, Boolean mangleColumnNames)
at Spotfire.Dxp.Data.Producers.SourceColumnProducer.<>c__DisplayClass75_0.<CreateView>b__0()
at Spotfire.Dxp.Framework.ApplicationModel.Progress.ExecuteSubtask(String title, ProgressOperation operation)
at Spotfire.Dxp.Data.Producers.SourceColumnProducer.CreateView(CxxSession session, DataPropertyRegistry propertyRegistry, GlobalMethodRegistry globalMethodRegistry, DataSourceConnection connection, IDataPropertyContainer defaultColumnProperties, PartialDataLoadReport& partialLoadReport)
at Spotfire.Dxp.Data.Producers.SourceColumnProducer.GetColumnsAndProperties(DataSourceConnection connection)
at Spotfire.Dxp.Data.Persistence.DataItem.PerformUpdate(SourceColumnProducer producer, DataSourceConnection connection)
at Spotfire.Dxp.Data.Persistence.DataItem.Update(SourceColumnProducer producer, DataSourceConnection connection)
at Spotfire.Dxp.Data.Persistence.DataPool.<LoadData>d__15.MoveNext()
at Spotfire.Dxp.Data.Producers.SourceColumnProducer.OnConfigure()
at Spotfire.Dxp.Framework.DocumentModel.Node.ConfigureSubTree()
at Spotfire.Dxp.Framework.DocumentModel.Node.<>c.<ConfigureSubTree>b__47_0(Node node)
at Spotfire.Dxp.Framework.DocumentModel.UndoableListAvlLeaf`1.ForEachChild(Action`1 action, Boolean includeFrozen)
at Spotfire.Dxp.Framework.DocumentModel.Node.ConfigureSubTree()
at Spotfire.Dxp.Framework.DocumentModel.Node.<>c.<ConfigureSubTree>b__47_0(Node node)
at Spotfire.Dxp.Framework.DocumentModel.UndoableList`1.ForEachChild(Action`1 action, Boolean includeFrozen)
at Spotfire.Dxp.Framework.DocumentModel.Node.ConfigureSubTree()
at Spotfire.Dxp.Framework.DocumentModel.Node.<>c.<ConfigureSubTree>b__47_0(Node node)
at Spotfire.Dxp.Framework.DocumentModel.State.NodeState.<>c__DisplayClass92_0.<ForEachManagedChild>b__0(IDocumentNodeChild documentNodeChild)
at Spotfire.Dxp.Framework.DocumentModel.State.NodeState.ForEachChild(IDocumentNodeChild[] children, Action`1 action)
at Spotfire.Dxp.Framework.DocumentModel.Node.ConfigureSubTree()
at Spotfire.Dxp.Framework.ApplicationModel.Progress.ExecuteSubtask(String title, IndeterminateProgressFormatter progressFormatter, ProgressOperation operation)
at Spotfire.Dxp.Framework.DocumentModel.DocumentNode.ConfigureAndAttachFromNew()
at Spotfire.Dxp.Framework.DocumentModel.DocumentNode.AttachSubTreeWhileExecuting(UndoableNodeBase newOwner)
at Spotfire.Dxp.Framework.DocumentModel.UndoableNode.Spotfire.Dxp.Framework.DocumentModel.IUndoableNode.AttachItemToUndoableNode(Object item)
at Spotfire.Dxp.Framework.DocumentModel.UndoableKeyedCollection`2.<>c__DisplayClass43_0.<Insert>b__0()
at Spotfire.Dxp.Framework.DocumentModel.Node.InternalTransaction(Executor executor, Boolean rollbackNestedInternalTransactionAtException, Boolean isStreamingProperty)
at Spotfire.Dxp.Framework.DocumentModel.UndoableKeyedCollection`2.Insert(Int32 index, TNode item)
at Spotfire.Dxp.Data.DataTableCollection.<>c__DisplayClass89_0.<Add>b__0()
at Spotfire.Dxp.Framework.DocumentModel.Node.InternalTransaction(Executor executor, Boolean rollbackNestedInternalTransactionAtException, Boolean isStreamingProperty)
at Spotfire.Dxp.Data.DataTableCollection.Add(DataTable dataTable)
at Spotfire.Dxp.Application.PartiallyOpenedDataSource.<LoadData>d__9.MoveNext()
at Spotfire.Dxp.Application.AnalysisApplication.<OpenPartiallyOpenedDocument>d__98.MoveNext()
at Spotfire.Dxp.Application.AnalysisApplication.<OpenDataSource>d__91.MoveNext()
at Spotfire.Dxp.Application.AnalysisApplication.ConsumeDataLoadPromptRequests(IEnumerable`1 prompts)
at Spotfire.Dxp.Application.AnalysisApplication.Open(DataSource source, DocumentOpenSettings settings)
at Spotfire.Dxp.Forms.Data.Import.DataSourceFactoryService.OpenDataSourceWithoutPrompting(DataSource dataSource, DocumentOpenSettings documentOpenSettings, IServiceProvider serviceProvider)
at Spotfire.Dxp.Forms.Data.Import.DataSourceFactoryService.OpenDataSource(InformationLinkDataSource dataSource, DocumentOpenSettings documentOpenSettings, IServiceProvider serviceProvider)
at Spotfire.Dxp.Framework.ApplicationModel.Progress.<>c__DisplayClass21_0.<Start>b__0()
at Spotfire.Dxp.Framework.ApplicationModel.MonitorableProgress.Start[T](Func`1 action)
at Spotfire.Dxp.Forms.Application.FormsProgressService.ProgressThread.DoOperationLoop()
InformationModelException at Spotfire.Dxp.Data:
Failed to get data: 57014 TEIID30160 The request /IK4StVK70Kl.1 has been cancelled. (HRESULT: 80131500)
Stack Trace:
at Spotfire.Dxp.Data.InformationModel.InternalInformationModelManager.DataStream.GetNextBlock()
at Spotfire.Dxp.Data.InformationModel.InternalInformationModelManager.DataStream.Read(Byte[] buffer, Int32 offset, Int32 count)
at Spotfire.Dxp.Internal.Utilities.SeekableStream.Read(Byte[] buffer, Int32 offset, Int32 count)
at Spotfire.Dxp.Framework.ApplicationModel.ProgressIncrementStream.Read(Byte[] buffer, Int32 offset, Int32 count)
at Spotfire.Dxp.Internal.Utilities.SharedMemoryStream.CopyUnprotected(Stream inputStream, Int32 bufferSize)
at Spotfire.Dxp.Data.Cxx.CxxColumnManager.LoadSBDF(String id, Stream shm, Action streamWriter, Action abortAction, PartialTableCallback partialTableCallback, Boolean useFasterWipImplementation)
at Spotfire.Dxp.Data.Cxx.CxxDataTransfer.LoadSBDF(CxxSession session, Stream stream, PartialTableCallback partialTableCallback, Boolean useFasterSBDF)
at Spotfire.Dxp.Data.Import.SbdfDataRowReader.TryCreateCxxRepresentation(CxxSession session, Int64 exclusiveStartRowIndex, UInt64 maxRowsThatWillBeRead, PendingViewRequestsManager pendingViewRequestsManager, CxxTable& table)
at Spotfire.Dxp.Data.Cxx.CxxDataTransfer.CreateTable(DataRowReader dataRowReader, CxxSession session, PartialDataLoadReport report, Advancer rowAdvancer, Boolean needsReset, Int64 exclusiveStartRowIndex, UInt64 maxRowsThatWillBeRead, PendingViewRequestsManager partiallyLoadedConsumers)
at Spotfire.Dxp.Data.Cxx.CxxDataTransfer.CreateTable(DataRowReader dataRowReader, CxxSession session, PartialDataLoadReport report, PendingViewRequestsManager pendingViewRequestsManager, Boolean needsReset, UInt64 maxRowsToConsume)
at Spotfire.Dxp.Data.ColumnFactory.CreateColumns(DataRowReader reader, String documentTitleForOrigin, IDataPropertyContainer defaultProperties, DataPropertyRegistry dataPropertyRegistry, GlobalMethodRegistry globalMethodRegistry, CxxSession session, Boolean addNewProperties, PartialDataLoadReport loadReport, ResultProperties resultProperties, PendingViewRequestsManager pendingViewRequestsManager, Boolean mangleColumnNames)
InformationModelServiceException at Spotfire.Dxp.Services:
Failed to get data: 57014 TEIID30160 The request /IK4StVK70Kl.1 has been cancelled. (HRESULT: 80131509)
Stack Trace:
at Spotfire.Dxp.Services.WebServiceBase`1.InvokeService[T](ServiceMethod`1 serviceMethod, ExceptionFactoryMethod exceptionFactoryMethod, String customMethodNameForLogging)
at Spotfire.Dxp.Services.Data.InformationModel.QueryManagerService.InvokeService[T](ServiceMethod`1 serviceMethod)
at Spotfire.Dxp.Data.InformationModel.InternalInformationModelManager.DataStream.GetNextBlock()
Appreciate the help. Thanks

Infinispan with hanging threads in level 1 clean up

We are trying to upgrade from Wildfly 18 to 22 and are experiencing some trouble with threads left dangling in the jgroups thread group. When the number of threads reaches the default max number (200) everything stops up (naturally). This takes two to three days in our test environment.
In the thread dump we are left with 200 dangling threads that look like this:
at jdk.internal.misc.Unsafe.park(java.base@11.0.8/Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.8/LockSupport.java:234)
at java.util.concurrent.CompletableFuture$Signaller.block(java.base@11.0.8/CompletableFuture.java:1798)
at java.util.concurrent.ForkJoinPool.managedBlock(java.base@11.0.8/ForkJoinPool.java:3128)
at java.util.concurrent.CompletableFuture.timedGet(java.base@11.0.8/CompletableFuture.java:1868)
at java.util.concurrent.CompletableFuture.get(java.base@11.0.8/CompletableFuture.java:2021)
at org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:125)
at org.infinispan.util.concurrent.CompletionStages.join(CompletionStages.java:80)
at org.infinispan.interceptors.distribution.L1NonTxInterceptor.lambda$handleDataWriteCommand$10(L1NonTxInterceptor.java:399)
at org.infinispan.interceptors.distribution.L1NonTxInterceptor$$Lambda$1806/0x0000000841ace040.apply(Unknown Source)
at org.infinispan.interceptors.impl.QueueAsyncInvocationStage.invokeQueuedHandlers(QueueAsyncInvocationStage.java:125)
at org.infinispan.interceptors.impl.QueueAsyncInvocationStage.accept(QueueAsyncInvocationStage.java:88)
at org.infinispan.interceptors.impl.QueueAsyncInvocationStage.accept(QueueAsyncInvocationStage.java:33)
at java.util.concurrent.CompletableFuture.uniWhenComplete(java.base@11.0.8/CompletableFuture.java:859)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(java.base@11.0.8/CompletableFuture.java:837)
at java.util.concurrent.CompletableFuture.postComplete(java.base@11.0.8/CompletableFuture.java:506)
at java.util.concurrent.CompletableFuture.complete(java.base@11.0.8/CompletableFuture.java:2073)
at org.infinispan.interceptors.distribution.PrimaryOwnerOnlyCollector.primaryResult(PrimaryOwnerOnlyCollector.java:30)
at org.infinispan.interceptors.distribution.TriangleDistributionInterceptor.lambda$forwardToPrimary$6(TriangleDistributionInterceptor.java:526)
at org.infinispan.interceptors.distribution.TriangleDistributionInterceptor$$Lambda$1953/0x0000000841d51040.apply(Unknown Source)
at java.util.concurrent.CompletableFuture.uniHandle(java.base@11.0.8/CompletableFuture.java:930)
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(java.base@11.0.8/CompletableFuture.java:907)
at java.util.concurrent.CompletableFuture.postComplete(java.base@11.0.8/CompletableFuture.java:506)
at java.util.concurrent.CompletableFuture.complete(java.base@11.0.8/CompletableFuture.java:2073)
at org.infinispan.remoting.transport.AbstractRequest.complete(AbstractRequest.java:67)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:45)
at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:52)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1402)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1305)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:131)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1445)
at org.jgroups.JChannel.up(JChannel.java:784)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:913)
at org.jgroups.protocols.FRAG3.up(FRAG3.java:165)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:876)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243)
at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1049)
at org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:772)
at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:753)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:405)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:592)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
at org.jgroups.protocols.FailureDetection.up(FailureDetection.java:186)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254)
at org.jgroups.protocols.MERGE3.up(MERGE3.java:281)
at org.jgroups.protocols.Discovery.up(Discovery.java:300)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1396)
at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.8/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.8/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11.0.8/Thread.java:834)
These threads are hanging on a CompletableFuture.get(...) with an Infinispan-hardcoded timeout of 24 hours. This is probably not something that should happen, which is confirmed by the exception we get when the timeout is finally reached:
2021-03-03 09:00:46,638 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (jgroups-263,xxxxxxxxxxxx) ISPN000136: Error executing command ReadWriteKeyCommand on Cache 'sessionmanager.session', writing keys [xxxx#xxxxxx]: java.lang.IllegalStateException: This should never happen!
at org.infinispan@11.0.8.Final//org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:127)
at org.infinispan@11.0.8.Final//org.infinispan.util.concurrent.CompletionStages.join(CompletionStages.java:80)
at org.infinispan@11.0.8.Final//org.infinispan.interceptors.distribution.L1NonTxInterceptor.lambda$handleDataWriteCommand$10(L1NonTxInterceptor.java:399)
at org.infinispan@11.0.8.Final//org.infinispan.interceptors.impl.QueueAsyncInvocationStage.invokeQueuedHandlers(QueueAsyncInvocationStage.java:125)
at org.infinispan@11.0.8.Final//org.infinispan.interceptors.impl.QueueAsyncInvocationStage.accept(QueueAsyncInvocationStage.java:88)
at org.infinispan@11.0.8.Final//org.infinispan.interceptors.impl.QueueAsyncInvocationStage.accept(QueueAsyncInvocationStage.java:33)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at org.infinispan@11.0.8.Final//org.infinispan.interceptors.distribution.PrimaryOwnerOnlyCollector.primaryResult(PrimaryOwnerOnlyCollector.java:30)
at org.infinispan@11.0.8.Final//org.infinispan.interceptors.distribution.TriangleDistributionInterceptor.lambda$forwardToPrimary$6(TriangleDistributionInterceptor.java:526)
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at org.infinispan@11.0.8.Final//org.infinispan.remoting.transport.AbstractRequest.complete(AbstractRequest.java:67)
at org.infinispan@11.0.8.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:45)
at org.infinispan@11.0.8.Final//org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:52)
at org.infinispan@11.0.8.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1402)
at org.infinispan@11.0.8.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1305)
at org.infinispan@11.0.8.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:131)
at org.infinispan@11.0.8.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1445)
at org.jgroups@4.2.10.Final//org.jgroups.JChannel.up(JChannel.java:784)
at org.jgroups@4.2.10.Final//org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:913)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.FRAG3.up(FRAG3.java:165)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.pbcast.GMS.up(GMS.java:876)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1049)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:772)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:753)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.UNICAST3.up(UNICAST3.java:405)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:592)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.FailureDetection.up(FailureDetection.java:186)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.MERGE3.up(MERGE3.java:281)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.Discovery.up(Discovery.java:300)
at org.jgroups@4.2.10.Final//org.jgroups.protocols.TP.passMessageUp(TP.java:1396)
at org.jgroups@4.2.10.Final//org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.util.concurrent.TimeoutException
at java.base/java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021)
at org.infinispan@11.0.8.Final//org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:125)
... 44 more
Has anyone encountered a similar problem? What could possibly lead to this situation?

Curator Leader election give connection refused error

I implemented the Curator leader election example given on this site.
Instead of having a number of Curator clients, I added only one Curator client, as follows:
public void selectLeader() {
    CuratorFramework client = null;
    try {
        client = CuratorFrameworkFactory.newClient("localhost:2181", new ExponentialBackoffRetry(1000, 3));
        LeaderSelectorService service = new LeaderSelectorService(client, "/leaderSelections", "LeaderElector");
        client.start();
        Thread.sleep(10000);
        service.start();
    } catch (Exception e) {
        System.out.println("error" + e);
    } finally {
        System.out.println("Shutting down...");
        // CloseableUtils.closeQuietly(client);
    }
}
public class LeaderSelectorService extends LeaderSelectorListenerAdapter implements Closeable {

    private final String name;
    private final LeaderSelector leaderSelector;

    public LeaderSelectorService(CuratorFramework client, String path, String name) {
        this.name = name;
        // create a leader selector using the given path for management
        // all participants in a given leader selection must use the same path
        // ExampleClient here is also a LeaderSelectorListener but this isn't required
        leaderSelector = new LeaderSelector(client, path, this);
        // for most cases you will want your instance to requeue when it relinquishes leadership
        leaderSelector.autoRequeue();
    }

    public void start() throws IOException {
        // the selection for this instance doesn't start until the leader selector is started
        // leader selection is done in the background so this call to leaderSelector.start() returns immediately
        leaderSelector.start();
    }

    @Override
    public void takeLeadership(CuratorFramework arg0) throws Exception {
        // we are now the leader. This method should not return until we want to relinquish leadership
        final int waitSeconds = (int) (5 * Math.random()) + 1;
        System.out.println(name + " is now the leader. Waiting " + waitSeconds + " seconds...");
        //System.out.println(name + " has been leader " + leaderCount.getAndIncrement() + " time(s) before.");
        try {
            Thread.sleep(TimeUnit.SECONDS.toMillis(waitSeconds));
        } catch (InterruptedException e) {
            System.err.println(name + " was interrupted.");
            Thread.currentThread().interrupt();
        } finally {
            System.out.println(name + " relinquishing leadership.\n");
        }
    }

    @Override
    public void close() throws IOException {
        leaderSelector.close();
    }
}
I have only one ZooKeeper instance, and I am using ZooKeeper 3.4.6, curator-framework 4.0.0 and curator-recipes 4.0.0.
When I start the client, it connects to ZooKeeper, and in the log I can see the "State change: connected" message.
Then I wait 10 s and start leader election, which gives me the below error repeatedly.
2017-09-06 09:34:22.727 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Unable to read additional data from server sessionid 0x15e555a719d0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-09-06 09:34:22.830 INFO 1228 --- [c-1-EventThread] o.a.c.f.state.ConnectionStateManager : State change: SUSPENDED
2017-09-06 09:34:23.302 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-09-06 09:34:23.303 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established, initiating session, client: /127.0.0.1:49594, server: localhost/127.0.0.1:2181
2017-09-06 09:34:23.305 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15e555a719d0000, negotiated timeout = 120000
2017-09-06 09:34:23.305 INFO 1228 --- [c-1-EventThread] o.a.c.f.state.ConnectionStateManager : State change: RECONNECTED
2017-09-06 09:34:23.310 WARN 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x15e555a719d0000 for server localhost/127.0.0.1:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.8.0_131]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.8.0_131]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_131]
at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_131]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[na:1.8.0_131]
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:75) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:363) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
After some time it starts to give me the below error message.
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:831) [curator-framework-4.0.0.jar:4.0.0]
at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:623) [curator-framework-4.0.0.jar:4.0.0]
at org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:152) [curator-framework-4.0.0.jar:4.0.0]
at org.apache.curator.framework.imps.GetConfigBuilderImpl$2.processResult(GetConfigBuilderImpl.java:222) [curator-framework-4.0.0.jar:4.0.0]
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:590) [zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:499) [zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
2017-09-06 09:34:31.897 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-09-06 09:34:31.898 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established, initiating session, client: /127.0.0.1:49611, server: localhost/127.0.0.1:2181
2017-09-06 09:34:31.899 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15e555a719d0000, negotiated timeout = 120000
2017-09-06 09:34:31.899 INFO 1228 --- [c-1-EventThread] o.a.c.f.state.ConnectionStateManager : State change: RECONNECTED
2017-09-06 09:34:31.907 WARN 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x15e555a719d0000 for server localhost/127.0.0.1:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Xid out of order. Got Xid 41 with err -6 expected Xid 40 for a packet with details: clientPath:/leaderSelections serverPath:/leaderSelections finished:false header:: 40,12 replyHeader:: 0,0,-4 request:: '/leaderSelections,F response:: v{}
at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:892) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:101) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:363) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
I tried several solutions from the internet but none succeeded. Does anybody know the root cause of this issue?
I have fixed this issue. There was a version mismatch between the ZooKeeper version and the Curator version: I used Curator 4.0.0 with ZooKeeper 3.4.6, but according to the Apache Curator site, Curator 4.0.0 is compatible with ZooKeeper 3.5.x. I changed my Curator version to 2.8.0.
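For reference, a Maven sketch of two ways to line the versions up (version numbers illustrative; Curator's compatibility documentation also describes running Curator 4.x against ZooKeeper 3.4.x by excluding its bundled ZooKeeper):
<!-- Option 1: stay on a Curator line built against ZooKeeper 3.4.x -->
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>2.8.0</version>
</dependency>

<!-- Option 2: keep Curator 4.x, exclude its ZooKeeper 3.5.x and pin your own 3.4.6 -->
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.6</version>
</dependency>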

How do I run a Beam class in Dataflow which accesses a Google Cloud SQL instance?

When I run my pipeline from my local machine, I can update the table that resides in the Cloud SQL instance. But when I moved this to run using DataflowRunner, the same is failing with the below exception.
To connect from my Eclipse, I created the data source config as
.create("com.mysql.jdbc.Driver", "jdbc:mysql://<ip of sql instance>:3306/mydb")
and changed it to
.create("com.mysql.jdbc.GoogleDriver", "jdbc:google:mysql://<project-id>:<instance-name>/my-db")
while running through the Dataflow runner.
Should I prefix the zone information of the instance to it?
The exception I get when I run this is given below:
Jun 22, 2017 6:53:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2017-06-22T13:23:51.583Z: (840be37ab35d3d0d): Starting 2 workers in us-central1-f...
Jun 22, 2017 6:53:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2017-06-22T13:23:51.634Z: (dabfae1dc9365d10): Executing operation JdbcIO.Read/Create.Values/Read(CreateSource)+JdbcIO.Read/ParDo(Read)+JdbcIO.Read/ParDo(Anonymous)+JdbcIO.Read/GroupByKey/Reify+JdbcIO.Read/GroupByKey/Write
Jun 22, 2017 6:54:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2017-06-22T13:24:44.762Z: (21395b94f8bf7f61): Workers have started successfully.
SEVERE: 2017-06-22T13:25:30.214Z: (3b988386f963503e): java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot load JDBC driver class 'com.mysql.jdbc.GoogleDriver'
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:289)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:261)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:55)
at com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:43)
at com.google.cloud.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:78)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:125)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:105)
at com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:92)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: java.sql.SQLException: Cannot load JDBC driver class 'com.mysql.jdbc.GoogleDriver'
at org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:36)
at org.apache.beam.sdk.io.jdbc.JdbcIO$Read$ReadFn$auxiliary$M7MKjX9p.invokeSetup(Unknown Source)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:65)
at com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:47)
at com.google.cloud.dataflow.worker.runners.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:100)
at com.google.cloud.dataflow.worker.runners.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:70)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.createParDoOperation(MapTaskExecutorFactory.java:365)
at com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:278)
... 14 more
Any help fixing this is really appreciated. This is my first attempt to run a Beam pipeline as a Dataflow job. Here is the pipeline code:
PipelineOptions options = PipelineOptionsFactory.as(DataflowPipelineOptions.class);
((DataflowPipelineOptions) options).setNumWorkers(2);
((DataflowPipelineOptions) options).setProject("xxxxx");
((DataflowPipelineOptions) options).setStagingLocation("gs://xxxx/staging");
((DataflowPipelineOptions) options).setRunner(DataflowRunner.class);
((DataflowPipelineOptions) options).setStreaming(false);
options.setTempLocation("gs://xxxx/tempbucket");
options.setJobName("sqlpipeline");

PCollection<Account> collection = dataflowPipeline.apply(JdbcIO.<Account>read()
        .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration
                .create("com.mysql.jdbc.GoogleDriver", "jdbc:google:mysql://project-id:testdb/db")
                .withUsername("root").withPassword("root"))
        .withQuery(
                "select account_id,account_parent,account_description,account_type,account_rollup,Custom_Members from account")
        .withCoder(AvroCoder.of(Account.class))
        .withStatementPreparator(new JdbcIO.StatementPreparator() {
            public void setParameters(PreparedStatement preparedStatement) throws Exception {
                preparedStatement.setFetchSize(1);
                preparedStatement.setFetchDirection(ResultSet.FETCH_FORWARD);
            }
        })
        .withRowMapper(new JdbcIO.RowMapper<Account>() {
            public Account mapRow(ResultSet resultSet) throws Exception {
                Account account = new Account();
                account.setAccount_id(resultSet.getInt("account_id"));
                account.setAccount_parent(resultSet.getInt("account_parent"));
                account.setAccount_description(resultSet.getString("account_description"));
                account.setAccount_type(resultSet.getString("account_type"));
                account.setAccount_rollup(resultSet.getString("account_rollup"));
                account.setCustom_Members(resultSet.getString("Custom_Members"));
                return account;
            }
        }));
Have you properly pulled in the com.google.cloud.sql/mysql-socket-factory maven dependency? Looks like you are failing to load the class.
https://cloud.google.com/appengine/docs/standard/java/cloud-sql/#Java_Connect_to_your_database
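For what it's worth, a sketch of that dependency and the socket-factory style of JDBC URL from the linked page (the artifact version is illustrative, and the instance connection name has the form project:region:instance):
<dependency>
    <groupId>com.google.cloud.sql</groupId>
    <artifactId>mysql-socket-factory</artifactId>
    <version>1.0.2</version>
</dependency>
With that on the classpath you can keep the plain com.mysql.jdbc.Driver and connect through a URL like:
jdbc:mysql://google/<database>?cloudSqlInstance=<project>:<region>:<instance>&socketFactory=com.google.cloud.sql.mysql.SocketFactory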
Hi, I think it's better to move on with "com.mysql.jdbc.Driver", because the Google driver is meant for App Engine deployments.
This is what my pipeline configuration looks like, and it works perfectly fine for me:
PCollection<KV<Double, Double>> exchangeRates = p.apply(JdbcIO.<KV<Double, Double>>read()
        .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create("com.mysql.jdbc.Driver",
                "jdbc:mysql://ip:3306/dbname?user=root&password=root&useUnicode=true&characterEncoding=UTF-8"))
        .withQuery("SELECT PERIOD_YEAR, PERIOD_YEAR FROM SALE")
        .withCoder(KvCoder.of(DoubleCoder.of(), DoubleCoder.of()))
        .withRowMapper(new JdbcIO.RowMapper<KV<Double, Double>>() {
            @Override
            public KV<Double, Double> mapRow(java.sql.ResultSet resultSet) throws Exception {
                LOG.info(resultSet.getDouble(1) + "Came");
                return KV.of(resultSet.getDouble(1), resultSet.getDouble(2));
            }
        }));
Hope it will help

akka-http no stack trace or details on error

I have a structure which can basically be summarized as:
outside user makes a rest request to akka-http server
akka-http makes a request(query?) to a (some)data source using asynchttpclient
akka-http transforms the result from asynchttpclient and serves it back to user
At some point I am getting an error from akka which tells me almost nothing. This error happens right after the asynchttpclient returns some results. (In fact, at this point I can print the results in the log; they are there, parsed from JSON etc., but akka has already errored out.)
Even at debug logging level I get no decipherable error message or stack trace from akka.
The only message I get is:
2017-03-24 17:22:55 INFO CompanyRepository:111 - search company with name:"somecompanyname"
2017-03-24 17:22:55 INFO CompanyRepository:73 - [QUERY TIME]: 527ms
[ERROR] [03/24/2017 17:22:55.951] [company-api-system-akka.actor.default-dispatcher-3] [akka.actor.ActorSystemImpl(company-api-system)] Error during processing of request: 'requirement failed'. Completing with 500 Internal Server Error response.
This error message is the only thing I get. Relevant parts of my config:
akka {
  loglevel = "DEBUG"
  # edit -- tested with sl4jlogger with no change
  #loggers = ["akka.event.slf4j.Slf4jLogger"]
  #logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"

  parsing {
    max-content-length = 800m
    max-chunk-size = 100m
  }

  server {
    server-header = akka-http/${akka.http.version}
    idle-timeout = 120 s
    request-timeout = 120 s
    bind-timeout = 10s
    max-connections = 1024
    pipelining-limit = 32
    verbose-error-messages = on
  }

  client {
    user-agent-header = akka-http/${akka.http.version}
  }

  host-connection-pool {
    max-connections = 4
  }
}

akka.http.routing {
  verbose-error-messages = on
}
Does anyone know how I can make akka spit out more details about what/where the error is occurring?
Edit: I realized I do NOT get this same error on resultsets which are smaller in size. <- ignore
Edit 2:
Added akka.loglevel = DEBUG, which spits out a lot more noise but still no detail about the actual error.
Converted asynchttpclient to akka quickly to rule out AHC
I already had a wrapper around my query to time it, added some logging there trying to pinpoint when exactly the error is happening.
def queryTimer[R <: Future[Any]](block: => R): R = {
  val t0 = System.currentTimeMillis()
  val result = block
  result.onComplete { maybeResult =>
    val t1 = System.currentTimeMillis()
    logger.info("[QUERY TIME]: " + (t1 - t0) + "ms")
    maybeResult match {
      case Success(some) =>
        logger.info("successful feature:")
        logger.info(FormattedString.prettyPrint(some))
      case Failure(someFailure) =>
        logger.info("failed feature:")
        logger.debug(FormattedString.prettyPrint(someFailure))
    }
  }
  result
}
resulting log:
2017-03-28 13:19:10 INFO CompanyRepository:111 - search company with name:"some company"
[DEBUG] [03/28/2017 13:19:10.497] [company-api-system-akka.actor.default-dispatcher-2] [EventStream(akka://xca-api-actor-system)] logger log1-Logging$DefaultLogger started
[DEBUG] [03/28/2017 13:19:10.497] [company-api-system-akka.actor.default-dispatcher-2] [EventStream(akka://xca-api-actor-system)] Default Loggers started
[DEBUG] [03/28/2017 13:19:10.613] [company-api-system-akka.actor.default-dispatcher-2] [AkkaSSLConfig(akka://xca-api-actor-system)] Initializing AkkaSSLConfig extension...
[DEBUG] [03/28/2017 13:19:10.613] [company-api-system-akka.actor.default-dispatcher-2] [AkkaSSLConfig(akka://xca-api-actor-system)] buildHostnameVerifier: created hostname verifier: com.typesafe.sslconfig.ssl.DefaultHostnameVerifier@779e2339
[DEBUG] [03/28/2017 13:19:10.633] [xca-api-actor-system-akka.actor.default-dispatcher-3] [akka://xca-api-actor-system/user/pool-master/PoolInterfaceActor-0] (Re-)starting host connection pool to localhost:27474
[DEBUG] [03/28/2017 13:19:10.727] [xca-api-actor-system-akka.actor.default-dispatcher-3] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Resolving localhost before connecting
[DEBUG] [03/28/2017 13:19:10.740] [xca-api-actor-system-akka.actor.default-dispatcher-4] [akka://xca-api-actor-system/system/IO-DNS] Resolution request for localhost from Actor[akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0#-815754478]
[DEBUG] [03/28/2017 13:19:10.749] [xca-api-actor-system-akka.actor.default-dispatcher-4] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Attempting connection to [localhost/127.0.0.1:27474]
[DEBUG] [03/28/2017 13:19:10.751] [xca-api-actor-system-akka.actor.default-dispatcher-4] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Connection established to [localhost:27474]
2017-03-28 13:19:10 INFO CompanyRepository:73 - [QUERY TIME]: 376ms
2017-03-28 13:19:10 INFO CompanyRepository:77 - successful feature:
[ERROR] [03/28/2017 13:19:10.896] [company-api-system-akka.actor.default-dispatcher-7] [akka.actor.ActorSystemImpl(company-api-system)] Error during processing of request: 'requirement failed'. Completing with 500 Internal Server Error response.
2017-03-28 13:19:10 INFO CompanyRepository:78 - SearchResult(List(
(prettyprint output here!!! lots and lots of legit results, JSON parsed successfully into a bunch of case classes)
As you can see, my logging format and akka's are different; the ERROR is coming from akka with no details, while everything looks like it's working.
Edit 3: logs with a sleep in between calls.
The new query timer function with sleeps:
def queryTimer[R <: Future[Any]](block: => R): R = {
  val t0 = System.currentTimeMillis()
  val result = block
  result.onComplete { maybeResult =>
    val t1 = System.currentTimeMillis()
    logger.info("[QUERY TIME]: " + (t1 - t0) + "ms")
    maybeResult match {
      case Success(some) =>
        Thread.sleep(500)
        logger.info("successful feature:")
        Thread.sleep(500)
        logger.info(FormattedString.prettyPrint(some))
        Thread.sleep(500)
        logger.info("we are there!")
      case Failure(someFailure) =>
        logger.info("failed feature:")
        logger.debug(FormattedString.prettyPrint(someFailure))
    }
  }
  result
}
Logs with sleeps:
[DEBUG] [03/30/2017 11:11:58.629] [xca-api-actor-system-akka.actor.default-dispatcher-7] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Attempting connection to [localhost/127.0.0.1:27474]
[DEBUG] [03/30/2017 11:11:58.631] [xca-api-actor-system-akka.actor.default-dispatcher-7] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Connection established to [localhost:27474]
11:11:59.442 [pool-2-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:11:59.496 [pool-1-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:00.250 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - [QUERY TIME]: 1880ms
[ERROR] [03/30/2017 11:12:00.265] [company-api-system-akka.actor.default-dispatcher-3] [akka.actor.ActorSystemImpl(company-api-system)] Error during processing of request: 'requirement failed'. Completing with 500 Internal Server Error response.
11:12:00.543 [pool-2-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:00.597 [pool-1-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:00.752 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - successful feature:
11:12:01.645 [pool-2-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:01.697 [pool-1-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:01.750 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - SearchResult(List( "lots of legit result here"
11:12:02.281 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - we are there!
Edit 4 and solution!
Apparently the default exception handler does not print a stack trace! Overriding the exception handler with a very basic catch-all:
implicit def myExceptionHandler: ExceptionHandler =
  ExceptionHandler {
    case e: Exception =>
      logger.info("---------------- exception log start")
      logger.error(e.getMessage, e)
      logger.error("cause", e.getCause)
      logger.error("cause", e.getStackTraceString)
      logger.info(FormattedString.prettyPrint(e))
      logger.info("---------------- exception log end")
      Directives.complete("server made a boo boo")
  }
results in a stack trace that befuddles the sh*t out of me!!
11:42:04.634 [company-api-system-akka.actor.default-dispatcher-2] INFO c.stepweb.scarifgate.CompanyApiApp$ - ---------------- exception log start
11:42:04.640 [company-api-system-akka.actor.default-dispatcher-2] ERROR c.stepweb.scarifgate.CompanyApiApp$ - requirement failed
java.lang.IllegalArgumentException: requirement failed
at scala.Predef$.require(Predef.scala:212) ~[scala-library-2.11.8.jar:na]
at spray.json.BasicFormats$StringJsonFormat$.write(BasicFormats.scala:121) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.BasicFormats$StringJsonFormat$.write(BasicFormats.scala:119) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormats$class.productElement2Field(ProductFormats.scala:46) ~[spray-json_2.11-1.3.2.jar:na]
at com.stepweb.scarifgate.services.CompanyService.productElement2Field(CompanyService.scala:14) ~[classes/:na]
at spray.json.ProductFormatsInstances$$anon$3.write(ProductFormatsInstances.scala:73) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormatsInstances$$anon$3.write(ProductFormatsInstances.scala:68) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.PimpedAny.toJson(package.scala:39) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.CollectionFormats$$anon$1$$anonfun$write$1.apply(CollectionFormats.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.CollectionFormats$$anon$1$$anonfun$write$1.apply(CollectionFormats.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at scala.collection.immutable.List.map(List.scala:273) ~[scala-library-2.11.8.jar:na]
at spray.json.CollectionFormats$$anon$1.write(CollectionFormats.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.CollectionFormats$$anon$1.write(CollectionFormats.scala:25) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormats$class.productElement2Field(ProductFormats.scala:46) ~[spray-json_2.11-1.3.2.jar:na]
at com.stepweb.scarifgate.services.CompanyService.productElement2Field(CompanyService.scala:14) ~[classes/:na]
at spray.json.ProductFormatsInstances$$anon$1.write(ProductFormatsInstances.scala:30) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormatsInstances$$anon$1.write(ProductFormatsInstances.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport$$anonfun$sprayJsonMarshaller$1.apply(SprayJsonSupport.scala:62) ~[akka-http-spray-json_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport$$anonfun$sprayJsonMarshaller$1.apply(SprayJsonSupport.scala:62) ~[akka-http-spray-json_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$optionMarshaller$1$$anonfun$apply$1.apply(GenericMarshallers.scala:19) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$optionMarshaller$1$$anonfun$apply$1.apply(GenericMarshallers.scala:18) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.PredefinedToResponseMarshallers$$anonfun$fromStatusCodeAndHeadersAndValue$1$$anonfun$apply$5.apply(PredefinedToResponseMarshallers.scala:58) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.PredefinedToResponseMarshallers$$anonfun$fromStatusCodeAndHeadersAndValue$1$$anonfun$apply$5.apply(PredefinedToResponseMarshallers.scala:57) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.ToResponseMarshallable$$anonfun$1$$anonfun$apply$1.apply(ToResponseMarshallable.scala:29) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.ToResponseMarshallable$$anonfun$1$$anonfun$apply$1.apply(ToResponseMarshallable.scala:29) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$futureMarshaller$1$$anonfun$apply$3$$anonfun$apply$4.apply(GenericMarshallers.scala:33) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$futureMarshaller$1$$anonfun$apply$3$$anonfun$apply$4.apply(GenericMarshallers.scala:33) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.util.FastFuture$.akka$http$scaladsl$util$FastFuture$$strictTransform$1(FastFuture.scala:41) ~[akka-http-core_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.util.FastFuture$$anonfun$transformWith$extension1$1.apply(FastFuture.scala:51) [akka-http-core_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.util.FastFuture$$anonfun$transformWith$extension1$1.apply(FastFuture.scala:50) [akka-http-core_2.11-10.0.0.jar:10.0.0]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.11.8.jar:na]
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.16.jar:na]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72) [scala-library-2.11.8.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415) [akka-actor_2.11-2.4.16.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [scala-library-2.11.8.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [scala-library-2.11.8.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [scala-library-2.11.8.jar:na]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [scala-library-2.11.8.jar:na]
11:42:04.640 [company-api-system-akka.actor.default-dispatcher-2] ERROR c.stepweb.scarifgate.CompanyApiApp$ - cause
11:42:04.641 [company-api-system-akka.actor.default-dispatcher-2] ERROR c.stepweb.scarifgate.CompanyApiApp$ - cause
11:42:04.644 [company-api-system-akka.actor.default-dispatcher-2] INFO c.stepweb.scarifgate.CompanyApiApp$ - java.lang.IllegalArgumentException: requirement failed
11:42:04.644 [company-api-system-akka.actor.default-dispatcher-2] INFO c.stepweb.scarifgate.CompanyApiApp$ - ---------------- exception log end
So... the exception is caused here, in spray.json.BasicFormats:
implicit object StringJsonFormat extends JsonFormat[String] {
  def write(x: String) = {
    require(x ne null) // <-----------------------------------
    JsString(x)
  }
  def read(value: JsValue) = value match {
    case JsString(x) => x
    case x           => deserializationError("Expected String as JsString, but got " + x)
  }
}
Which sort of means one of the strings in these thousands of lines of response is null. Special thanks go to the laziness of using that "require" without a message. Debugging which string is null, and where, will be a nightmare, but I still think akka should fail in a better way.
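(Footnote for anyone hitting the same wall: a hypothetical way to pinpoint, or temporarily tolerate, the offending field is to shadow the default string format inside your own protocol object. A sketch only; the subclass implicit should take precedence over the inherited BasicFormats one, but verify that on your Scala version, and don't ship nulls like this in earnest:)
import spray.json._

object LenientJsonProtocol extends DefaultJsonProtocol {
  // Defined in the subclass, so implicit resolution prefers it over the
  // inherited StringJsonFormat; emits JsNull instead of require-ing, which
  // is a good place to log and find the culprit field.
  implicit object NullTolerantStringFormat extends JsonFormat[String] {
    def write(x: String): JsValue =
      if (x eq null) JsNull // log here to locate the null field
      else JsString(x)
    def read(value: JsValue): String = value match {
      case JsString(s) => s
      case JsNull      => null
      case other       => deserializationError("Expected String as JsString, but got " + other)
    }
  }
}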
Well, the default akka-http ExceptionHandler doesn't print a stack trace; it prints only the error message, or its class name if the message is empty. But you can provide a custom exception handler that will print anything you want (i.e. the stack trace, in your example).
Some examples of how to make a custom exception handler are provided in ExceptionHandlerExamplesSpec on GitHub.
The simplest way in your case seems to be to define your own custom implicit exception handler:
import akka.http.scaladsl.model._
import akka.http.scaladsl.server._
import StatusCodes._
import Directives._
import scala.util.control.NonFatal

implicit def myExceptionHandler: ExceptionHandler =
  ExceptionHandler {
    case NonFatal(e) =>
      logger.error(s"Exception $e at\n${e.getStackTraceString}")
      complete(HttpResponse(InternalServerError, entity = "Internal Server Error"))
  }
Try setting the loggers as well - from your configuration it seems they're not set. Something like:
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
}
Also, consider using akka-slf4j along with its recommended logging backend, Logback.
This should make akka spit out more details.
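A sketch of the wiring that usually implies, with the akka-slf4j version matched to the akka 2.4.16 visible in your stack trace (the Logback version is illustrative):
libraryDependencies += "com.typesafe.akka" %% "akka-slf4j" % "2.4.16"
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.1.7"
With those on the classpath, the loggers/logging-filter settings above route akka's internal logging through SLF4J to Logback, so akka's lines pick up the same format as your application's.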