MyBatisCursorItemReader error when using multiple threads - spring-batch

I'm using Spring Batch (version 4.2.1) with MyBatisCursorItemReader to load data from MySQL. To improve performance, I'm trying to run the step on multiple threads, with a ThreadPoolTaskExecutor as the thread pool. With a single thread everything is fine, but when the pool is set to more than one thread, MyBatisCursorItemReader throws the exception below:
org.apache.ibatis.executor.result.ResultMapException: Error attempting to get column 'xxx' from result set. Cause: java.lang.NullPointerException
at org.apache.ibatis.type.BaseTypeHandler.getResult(BaseTypeHandler.java:83)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.applyAutomaticMappings(DefaultResultSetHandler.java:521)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.getRowValue(DefaultResultSetHandler.java:402)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleRowValuesForSimpleResultMap(DefaultResultSetHandler.java:354)
at org.apache.ibatis.executor.resultset.DefaultResultSetHandler.handleRowValues(DefaultResultSetHandler.java:328)
at org.apache.ibatis.cursor.defaults.DefaultCursor.fetchNextObjectFromDatabase(DefaultCursor.java:141)
at org.apache.ibatis.cursor.defaults.DefaultCursor.fetchNextUsingRowBound(DefaultCursor.java:125)
at org.apache.ibatis.cursor.defaults.DefaultCursor$CursorIterator.hasNext(DefaultCursor.java:197)
at org.mybatis.spring.batch.MyBatisCursorItemReader.doRead(MyBatisCursorItemReader.java:55)
at org.springframework.batch.item.support.AbstractItemCountingItemStreamItemReader.read(AbstractItemCountingItemStreamItemReader.java:93)
at org.springframework.batch.core.step.item.SimpleChunkProvider.doRead(SimpleChunkProvider.java:99)
at org.springframework.batch.core.step.item.SimpleChunkProvider.read(SimpleChunkProvider.java:180)
at org.springframework.batch.core.step.item.SimpleChunkProvider$1.doInIteration(SimpleChunkProvider.java:126)
at org.springframework.batch.repeat.support.RepeatTemplate.getNextResult(RepeatTemplate.java:375)
at org.springframework.batch.repeat.support.RepeatTemplate.executeInternal(RepeatTemplate.java:215)
at org.springframework.batch.repeat.support.RepeatTemplate.iterate(RepeatTemplate.java:145)
at org.springframework.batch.core.step.item.SimpleChunkProvider.provide(SimpleChunkProvider.java:118)
at org.springframework.batch.core.step.item.ChunkOrientedTasklet.execute(ChunkOrientedTasklet.java:71)
at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:407)
at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:331)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:140)
at org.springframework.batch.core.step.tasklet.TaskletStep$2.doInChunkContext(TaskletStep.java:273)
at org.springframework.batch.core.scope.context.StepContextRepeatCallback.doInIteration(StepContextRepeatCallback.java:82)
at org.springframework.batch.repeat.support.TaskExecutorRepeatTemplate$ExecutingRunnable.run(TaskExecutorRepeatTemplate.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException: null
at com.mysql.cj.jdbc.result.ResultSetImpl.findColumn(ResultSetImpl.java:545)
at com.mysql.cj.jdbc.result.ResultSetImpl.getTimestamp(ResultSetImpl.java:948)
at com.zaxxer.hikari.pool.HikariProxyResultSet.getTimestamp(HikariProxyResultSet.java)
at org.apache.ibatis.type.DateTypeHandler.getNullableResult(DateTypeHandler.java:39)
at org.apache.ibatis.type.DateTypeHandler.getNullableResult(DateTypeHandler.java:28)
at org.apache.ibatis.type.BaseTypeHandler.getResult(BaseTypeHandler.java:81)
Is there something wrong with my setup, or does MyBatisCursorItemReader not support multithreading?
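For reference, a minimal sketch of the kind of configuration described above; the MyRow item type, bean names, chunk size, and pool sizes are illustrative assumptions, not the original code:
import org.mybatis.spring.batch.MyBatisCursorItemReader;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class JobConfig {

    public static class MyRow { } // placeholder for the mapped row type

    // One MyBatisCursorItemReader shared by a chunk-oriented step running on
    // a ThreadPoolTaskExecutor. With a pool size of 1 the step completes;
    // with a larger pool the reader throws the ResultMapException above.
    @Bean
    public Step multiThreadedStep(StepBuilderFactory steps,
                                  MyBatisCursorItemReader<MyRow> reader,
                                  ItemWriter<MyRow> writer) {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4); // anything > 1 triggers the failure
        executor.setMaxPoolSize(4);
        executor.initialize();
        return steps.get("multiThreadedStep")
                .<MyRow, MyRow>chunk(100)
                .reader(reader) // a single JDBC cursor read by several threads
                .writer(writer)
                .taskExecutor(executor)
                .build();
    }
}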

Related

WSO2 - Error while Calling Secured Proxy Service

I'm a beginner in WSO2.
I passed the username and password for the proxy user and also added the timestamp to the header using SoapUI.
Can someone help me resolve the error I'm facing?
[2022-12-20 12:38:25,793] ERROR {org.apache.axis2.engine.AxisEngine} - System error org.apache.axis2.AxisFault: System error
at org.wso2.carbon.security.pox.POXSecurityHandler.invoke(POXSecurityHandler.java:337)
at org.apache.axis2.engine.Phase.invokeHandler(Phase.java:340)
at org.apache.axis2.engine.Phase.invoke(Phase.java:313)
at org.apache.axis2.engine.AxisEngine.invoke(AxisEngine.java:261)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:167)
at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:459)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:182)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.axiom.soap.SOAPProcessingException: Hello is not supported here. Envelope can not have elements other than Header and Body.
at org.apache.axiom.soap.impl.builder.StAXSOAPModelBuilder.constructNode(StAXSOAPModelBuilder.java:392)
at org.apache.axiom.soap.impl.builder.StAXSOAPModelBuilder.createOMElement(StAXSOAPModelBuilder.java:265)
at org.apache.axiom.soap.impl.builder.StAXSOAPModelBuilder.createNextOMElement(StAXSOAPModelBuilder.java:234)
at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:249)
at org.apache.axiom.om.impl.llom.OMSerializableImpl.build(OMSerializableImpl.java:78)
at org.apache.axiom.om.impl.llom.OMElementImpl.build(OMElementImpl.java:722)
at org.apache.axiom.om.impl.llom.OMElementImpl.buildWithAttachments(OMElementImpl.java:1086)
at org.wso2.carbon.security.pox.POXSecurityHandler.setAuthHeaders(POXSecurityHandler.java:371)
at org.wso2.carbon.security.pox.POXSecurityHandler.invoke(POXSecurityHandler.java:292)
... 10 more
[2022-12-20 12:38:25,850] ERROR {org.apache.synapse.transport.passthru.ServerWorker} - Error processing POST request for : /services/WebServer_Proxy.WebServer_ProxyHttpsSoap11Endpoint org.apache.axis2.AxisFault: System error
at org.wso2.carbon.security.pox.POXSecurityHandler.invoke(POXSecurityHandler.java:337)
at org.apache.axis2.engine.Phase.invokeHandler(Phase.java:340)
at org.apache.axis2.engine.Phase.invoke(Phase.java:313)
at org.apache.axis2.engine.AxisEngine.invoke(AxisEngine.java:261)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:167)
at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:459)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:182)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.axiom.soap.SOAPProcessingException: Hello is not supported here. Envelope can not have elements other than Header and Body.
at org.apache.axiom.soap.impl.builder.StAXSOAPModelBuilder.constructNode(StAXSOAPModelBuilder.java:392)
at org.apache.axiom.soap.impl.builder.StAXSOAPModelBuilder.createOMElement(StAXSOAPModelBuilder.java:265)
at org.apache.axiom.soap.impl.builder.StAXSOAPModelBuilder.createNextOMElement(StAXSOAPModelBuilder.java:234)
at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:249)
at org.apache.axiom.om.impl.llom.OMSerializableImpl.build(OMSerializableImpl.java:78)
at org.apache.axiom.om.impl.llom.OMElementImpl.build(OMElementImpl.java:722)
at org.apache.axiom.om.impl.llom.OMElementImpl.buildWithAttachments(OMElementImpl.java:1086)
at org.wso2.carbon.security.pox.POXSecurityHandler.setAuthHeaders(POXSecurityHandler.java:371)
at org.wso2.carbon.security.pox.POXSecurityHandler.invoke(POXSecurityHandler.java:292)
... 10 more
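The Caused by here is quite literal: the SOAP builder found an element named Hello sitting directly under the Envelope, where only Header and Body are allowed. For orientation, a well-formed secured request keeps the WS-Security headers inside Header and the operation payload inside Body, roughly like this (element values are illustrative, not taken from the question):
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header>
    <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <wsse:UsernameToken>
        <wsse:Username>proxyUser</wsse:Username>
        <wsse:Password>changeit</wsse:Password>
      </wsse:UsernameToken>
      <wsu:Timestamp xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
        <wsu:Created>2022-12-20T12:38:25Z</wsu:Created>
        <wsu:Expires>2022-12-20T12:43:25Z</wsu:Expires>
      </wsu:Timestamp>
    </wsse:Security>
  </soapenv:Header>
  <soapenv:Body>
    <Hello/><!-- the operation element belongs here, not directly under Envelope -->
  </soapenv:Body>
</soapenv:Envelope>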

Spark fails to read from Elasticsearch/Opensearch. Invalid map received dynamic_date_formats

Hi, I'm using Scala 2.11.12, Spark 2.3.0, and elasticsearch-spark-20 7.7.0 to read from an OpenSearch 1.3.4 index with the following code:
spark.read.format("org.elasticsearch.spark.sql")
.load("myIndex")
.filter('Timestamp === lit(dateToRead))
But I get this error:
22/08/17 15:30:42 ERROR EventManager$: Unexpected error retrieving offsets. Bailing out...
Exception in thread "main" org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: invalid map received dynamic_date_formats=[yyyy-MM-dd HH:mm:ss||yyyy-MM-dd'T'HH:mm:ss.SSS||yyyy-MM-dd||yyyy-MM-dd'T'HH||yyyy-MM-dd'T'HH:mm]
at org.elasticsearch.hadoop.serialization.dto.mapping.FieldParser.parseField(FieldParser.java:146)
at org.elasticsearch.hadoop.serialization.dto.mapping.FieldParser.parseMapping(FieldParser.java:88)
at org.elasticsearch.hadoop.serialization.dto.mapping.FieldParser.parseIndexMappings(FieldParser.java:69)
at org.elasticsearch.hadoop.serialization.dto.mapping.FieldParser.parseMappings(FieldParser.java:40)
at org.elasticsearch.hadoop.rest.RestClient.getMappings(RestClient.java:321)
at org.elasticsearch.hadoop.rest.RestClient.getMappings(RestClient.java:307)
at org.elasticsearch.hadoop.rest.RestRepository.getMappings(RestRepository.java:293)
at org.elasticsearch.spark.sql.SchemaUtils$.discoverMappingAndGeoFields(SchemaUtils.scala:103)
at org.elasticsearch.spark.sql.SchemaUtils$.discoverMapping(SchemaUtils.scala:91)
at org.elasticsearch.spark.sql.ElasticsearchRelation.lazySchema$lzycompute(DefaultSource.scala:229)
at org.elasticsearch.spark.sql.ElasticsearchRelation.lazySchema(DefaultSource.scala:229)
at org.elasticsearch.spark.sql.ElasticsearchRelation$$anonfun$schema$1.apply(DefaultSource.scala:233)
at org.elasticsearch.spark.sql.ElasticsearchRelation$$anonfun$schema$1.apply(DefaultSource.scala:233)
at scala.Option.getOrElse(Option.scala:121)
at org.elasticsearch.spark.sql.ElasticsearchRelation.schema(DefaultSource.scala:233)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:431)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:174)
at com.MyCalass$$anonfun$myMethod2$1.apply(MyCalass.scala:130)
at com.MyCalass$$anonfun$myMethod2$1.apply(MyCalass.scala:126)
at scala.util.Try$.apply(Try.scala:192)
at com.MyCalass$.myMethod2(MyCalass.scala:126)
at com.MyCalass$.myMethod(MyCalass.scala:55)
at com.MyApp$.MyApp$$myMethod(MyApp.scala:107)
at com.MyApp$$anonfun$main$2.apply(MyApp.scala:86)
at com.MyApp$$anonfun$main$2.apply(MyApp.scala:76)
at scala.Option.fold(Option.scala:158)
at com.MyApp$.main(MyApp.scala:76)
at com.MyApp.main(MyApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Command exiting with ret '1'
I've set the dynamic date mapping in OpenSearch, and I'm able to write to the index with the correct mapping, but when I try to read, it fails.
I found the problem: the Elasticsearch connector is not working properly and tries to talk to Elasticsearch 1.3.4 instead of OpenSearch 1.3.4. To solve this, add compatibility.override_main_response_version: true to your opensearch.yml file.
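In opensearch.yml that is this single line (the comment is just an explanation of what it does):
# Make OpenSearch 1.x report an Elasticsearch-compatible version so that
# older elasticsearch-hadoop/elasticsearch-spark clients accept the handshake.
compatibility.override_main_response_version: true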

Spark 3 stream job fails with Cannot run program "chmod"

Spark 3.0 on Kubernetes, reading data from Kafka and pushing it out via a third-party Segment IO REST API.
I'm facing the error below while running a Spark streaming job:
Caused by: java.io.IOException: Cannot run program "chmod": error=11, Resource temporarily unavailable
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:938)
at org.apache.hadoop.util.Shell.run(Shell.java:901)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:865)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:252)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:232)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:331)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:320)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:351)
at org.apache.hadoop.fs.FileSystem.primitiveCreate(FileSystem.java:1228)
at org.apache.hadoop.fs.DelegateToFileSystem.createInternal(DelegateToFileSystem.java:100)
at org.apache.hadoop.fs.ChecksumFs$ChecksumFSOutputSummer.<init>(ChecksumFs.java:353)
at org.apache.hadoop.fs.ChecksumFs.createInternal(ChecksumFs.java:400)
at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:605)
at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:696)
at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:692)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
at org.apache.hadoop.fs.FileContext.create(FileContext.java:698)
at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createTempFile(CheckpointFileManager.scala:310)
at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:133)
at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:136)
at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createAtomic(CheckpointFileManager.scala:316)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.writeBatchToFile(HDFSMetadataLog.scala:131)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.$anonfun$add$3(HDFSMetadataLog.scala:120)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.add(HDFSMetadataLog.scala:118)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$17(MicroBatchExecution.scala:588)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.withProgressLocked(MicroBatchExecution.scala:598)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:585)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:223)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:352)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:350)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:69)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:191)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:185)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:334)
... 1 more
Caused by: java.io.IOException: error=11, Resource temporarily unavailable
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
Check your PATH environment variable.
(Maybe you overrode it to add some Spark/Kafka jars to the path?)
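If you do need to pin PATH explicitly on Kubernetes, Spark exposes per-container environment settings; a sketch (the directory list is just the common default, adjust for your image):
spark.kubernetes.driverEnv.PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
spark.executorEnv.PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin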

io.grpc frame size exceeds maximum when pulling a Google PubSub message

Does anybody know how to increase the io.grpc maxMessageSize when using the Google PubSub service?
When I pull a large message I sometimes get this exception:
Exception in thread "main" com.google.cloud.pubsub.PubSubException: io.grpc.StatusRuntimeException: INTERNAL: Exception deframing message
at com.google.cloud.pubsub.spi.DefaultPubSubRpc$1.apply(DefaultPubSubRpc.java:193)
at com.google.cloud.pubsub.spi.DefaultPubSubRpc$1.apply(DefaultPubSubRpc.java:187)
at com.google.common.util.concurrent.Futures$CatchingFuture.doFallback(Futures.java:842)
at com.google.common.util.concurrent.Futures$CatchingFuture.doFallback(Futures.java:834)
at com.google.common.util.concurrent.Futures$AbstractCatchingFuture.run(Futures.java:789)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:456)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:817)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:753)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:634)
at com.google.api.gax.grpc.RetryingCallable$RetryingResultFuture.setException(RetryingCallable.java:202)
at com.google.api.gax.grpc.RetryingCallable$Retryer.onFailure(RetryingCallable.java:147)
at com.google.common.util.concurrent.Futures$6.run(Futures.java:1764)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:456)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:817)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:753)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:634)
at com.google.api.gax.grpc.ExceptionTransformingCallable$ExceptionTransformingFuture.onFailure(ExceptionTransformingCallable.java:113)
at com.google.common.util.concurrent.Futures$6.run(Futures.java:1764)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:456)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:817)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:753)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:634)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:466)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:442)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:481)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:398)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:513)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:52)
at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.api.gax.grpc.ApiException: io.grpc.StatusRuntimeException: INTERNAL: Exception deframing message
... 20 more
Caused by: io.grpc.StatusRuntimeException: INTERNAL: Exception deframing message
at io.grpc.Status.asRuntimeException(Status.java:545)
... 13 more
Caused by: io.grpc.StatusRuntimeException: INTERNAL: Frame size 5454271 exceeds maximum: 4194304. If this is normal, increase the maxMessageSize in the channel/server builder
at io.grpc.Status.asRuntimeException(Status.java:536)
at io.grpc.internal.MessageDeframer.processHeader(MessageDeframer.java:338)
at io.grpc.internal.MessageDeframer.deliver(MessageDeframer.java:240)
at io.grpc.internal.MessageDeframer.deframe(MessageDeframer.java:176)
at io.grpc.internal.AbstractStream.deframe(AbstractStream.java:276)
at io.grpc.internal.AbstractClientStream.inboundDataReceived(AbstractClientStream.java:150)
at io.grpc.internal.Http2ClientStream.transportDataReceived(Http2ClientStream.java:137)
at io.grpc.netty.NettyClientStream.transportDataReceived(NettyClientStream.java:180)
at io.grpc.netty.NettyClientHandler.onDataRead(NettyClientHandler.java:251)
at io.grpc.netty.NettyClientHandler.access$700(NettyClientHandler.java:92)
at io.grpc.netty.NettyClientHandler$FrameListener.onDataRead(NettyClientHandler.java:588)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:256)
at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:48)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:410)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:244)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:155)
at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:113)
at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:333)
at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:393)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1066)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:900)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:571)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:512)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:426)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:398)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:877)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
The app is written in Java, uses http://googlecloudplatform.github.io/google-cloud-java/0.6.0/apidocs/index.html?com/google/cloud/pubsub/PubSub.html, and runs in a Google Compute Engine instance.
You can configure the Channel via something like NettyChannelBuilder.maxMessageSize().
In grpc v1.1 you will be able to use ManagedChannelBuilder.maxInboundMessageSize() and avoid using NettyChannelBuilder directly. You will also be able to configure it on a per-call basis with withMaxInboundMessageSize() on CallOptions and AbstractStub.
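A minimal sketch of the second option, assuming grpc-java 1.1+ (the endpoint and the 8 MiB limit are illustrative; the trace above shows a 5,454,271-byte frame, so any limit above that works):
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class PubSubChannel {
    public static ManagedChannel create() {
        // The default inbound limit is 4 MiB (4194304 bytes); raise it past
        // the failing frame size, e.g. to 8 MiB.
        return ManagedChannelBuilder
                .forAddress("pubsub.googleapis.com", 443)
                .maxInboundMessageSize(8 * 1024 * 1024)
                .build();
    }
}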

Actor creation error in akka

I'm trying to create an actor in Akka and call its receive function, but I keep getting this error:
[ERROR] [10/26/2013 18:53:29.313] [messagespreading-akka.actor.default-dispatcher-4] [akka://messagespreading/user/$a] error while processing Create(-1187846526)
70ec3d6a-184d-403c-8166-04aec76200c9akka.actor.ActorInitializationException: exception during creation
at akka.actor.ActorInitializationException$.apply(Actor.scala:169)
at akka.actor.ActorCell.create(ActorCell.scala:496)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:351)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:256)
at akka.dispatch.Mailbox.run(Mailbox.scala:211)
at akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec(AbstractDispatcher.scala:502)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.IllegalArgumentException: n must be positive
at java.util.Random.nextInt(Random.java:250)
at scala.util.Random.nextInt(Random.scala:65)
at NetworkBuilder.<init>(pastry.scala:431)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at java.lang.Class.newInstance0(Class.java:357)
at java.lang.Class.newInstance(Class.java:310)
at akka.util.Reflect$.instantiate(Reflect.scala:39)
at akka.actor.FromClassCreator.apply(Props.scala:187)
at akka.actor.FromClassCreator.apply(Props.scala:186)
at akka.actor.ActorCell.newActor(ActorCell.scala:461)
at akka.actor.ActorCell.create(ActorCell.scala:479)
... 8 more
Code to create the actor and call its receive:
val system = ActorSystem("messagespreading")
var NetworkBuilderObj:ActorRef= system.actorOf(Props[NetworkBuilder])
NetworkBuilderObj ! test
When an exception occurs in third-party code, you need to check whether there is a Caused by. In this case it says:
Caused by: java.lang.IllegalArgumentException: n must be positive
at java.util.Random.nextInt(Random.java:250)
at scala.util.Random.nextInt(Random.scala:65)
at NetworkBuilder.<init>(pastry.scala:431)
so there is the error - it has nothing to do with the code you posted per se.
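You can reproduce the underlying failure in two lines: java.util.Random.nextInt (which scala.util.Random delegates to) rejects a non-positive bound, so NetworkBuilder must be passing 0 or a negative value at pastry.scala:431. A sketch in plain Java:
import java.util.Random;

public class NextIntRepro {
    public static void main(String[] args) {
        Random rng = new Random();
        System.out.println(rng.nextInt(10)); // fine: bound is positive
        rng.nextInt(0); // throws IllegalArgumentException ("n must be positive" on older JDKs)
    }
}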