Milo OPC UA client times out waiting for secure channel

I am developing a service for Apache NiFi that is supposed to subscribe to a few variables on an OPC UA server. I can't get the service to connect to the server. This is the stack trace:
java.util.concurrent.ExecutionException: UaException: status=Bad_Timeout, message=timed out waiting for secure channel
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at com.hashmapinc.tempus.processors.StandardMiloOPCUAService.createClient(StandardMiloOPCUAService.java:75)
at com.hashmapinc.tempus.processors.StandardMiloOPCUAService.onEnabled(StandardMiloOPCUAService.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
at org.apache.nifi.controller.service.StandardControllerServiceNode$2.run(StandardControllerServiceNode.java:400)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.eclipse.milo.opcua.stack.core.UaException: timed out waiting for secure channel
at org.eclipse.milo.opcua.stack.client.handlers.UaTcpClientMessageHandler.lambda$handlerAdded$2(UaTcpClientMessageHandler.java:151)
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:581)
at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:655)
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:367)
... 1 common frames omitted
So it's timing out while waiting for a secure channel. When I supply some bogus IP address to the service, it just says that the connection timed out.
This is the code I use to create the client:
import static org.eclipse.milo.opcua.stack.core.types.builtin.unsigned.Unsigned.uint;

import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.sdk.client.api.config.OpcUaClientConfig;
import org.eclipse.milo.opcua.sdk.client.api.identity.AnonymousProvider;
import org.eclipse.milo.opcua.stack.client.UaTcpStackClient;
import org.eclipse.milo.opcua.stack.core.types.builtin.LocalizedText;
import org.eclipse.milo.opcua.stack.core.types.structured.EndpointDescription;

private OpcUaClient createClient() throws Exception {
    // Ask the server for its advertised endpoints and take the first one.
    EndpointDescription[] endpoints = UaTcpStackClient
            .getEndpoints("opc.tcp://theaddress:49320")
            .get();
    EndpointDescription endpoint = endpoints[0];

    // Anonymous identity, no certificate or key pair, 5 s request timeout.
    OpcUaClientConfig config = OpcUaClientConfig.builder()
            .setApplicationName(LocalizedText.english("MinimalClient"))
            .setApplicationUri("urn:theurn")
            .setCertificate(null)
            .setKeyPair(null)
            .setEndpoint(endpoint)
            .setMaxResponseMessageSize(uint(50000))
            .setIdentityProvider(new AnonymousProvider())
            .setRequestTimeout(uint(5000))
            .build();

    return new OpcUaClient(config);
}
Could this be a firewall issue? Does it connect to the server and then fail to open a secure channel? FYI: the server allows anonymous identity and has SecurityPolicy None.
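One thing worth checking: the method above takes endpoints[0] unconditionally, and servers often list their secured endpoints first. A minimal sketch of selecting the SecurityPolicy None endpoint explicitly, assuming the same Milo 0.x API as the code above:

// Pick the endpoint whose security policy matches what this client can do
// (no certificate, no key pair). The URI below is the standard identifier
// for SecurityPolicy None.
String noneUri = "http://opcfoundation.org/UA/SecurityPolicy#None";
EndpointDescription endpoint = java.util.Arrays.stream(endpoints)
        .filter(e -> noneUri.equals(e.getSecurityPolicyUri()))
        .findFirst()
        .orElseThrow(() -> new Exception("server advertised no SecurityPolicy#None endpoint"));

Another common cause of this exact timeout is that the EndpointDescription returned by getEndpoints advertises a host name that is not resolvable from the client machine: getEndpoints succeeds against the URL you supplied, but the follow-up connection that opens the secure channel goes to the advertised URL and never reaches the server.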

Related

Error when submitting a Spark job to a YARN cluster from a remote host

I am trying to spark-submit my jar with a Spark application to a remote YARN cluster.
I downloaded these files from the cluster:
hdfs-site.xml
yarn-site.xml
core-site.xml
and set the HADOOP_CONF_DIR environment variable to the directory containing these files.
Then I run spark-submit:
set HADOOP_CONF_DIR=C:\projects\config\0
spark-submit ^
--deploy-mode cluster ^
--principal test#tdomain ^
--keytab "test.keytab" ^
--queue garliq ^
--properties-file "SparkSubmit.conf" ^
--class ru.rosbank.App ^
scala-spark-maven-1.0-SNAPSHOT-jar-with-dependencies.jar
But I get this error:
INFO ConfiguredRMFailoverProxyProvider: Failing over to rm1
Exception in thread "main" java.io.IOException: DestHost:destPort node1.tdomain:8032 , LocalHost:localPort RS-AAA11111111/11.23.111.164:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: rm/node1.tdomain#DOMAIN, expecting: rm/11.22.33.155#TDOMAIN
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
at org.apache.hadoop.ipc.Client.call(Client.java:1457)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy7.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:271)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy8.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:605)
at org.apache.spark.deploy.yarn.Client.$anonfun$submitApplication$1(Client.scala:179)
at org.apache.spark.internal.Logging.logInfo(Logging.scala:57)
at org.apache.spark.internal.Logging.logInfo$(Logging.scala:56)
at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:65)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:179)
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1227)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1634)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:951)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1030)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1039)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: rm/node1.tdomain#DOMAIN, expecting: rm/11.22.33.155#TDOMAIN
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:866)
at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
... 29 more
Caused by: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: rm/node1.tdomain#DOMAIN, expecting: rm/11.22.33.155#TDOMAIN
at org.apache.hadoop.security.SaslRpcClient.getServerPrincipal(SaslRpcClient.java:337)
at org.apache.hadoop.security.SaslRpcClient.createSaslClient(SaslRpcClient.java:234)
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:160)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:617)
at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:411)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:804)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:800)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:800)
... 32 more
The problem is here:
Server has invalid Kerberos principal: rm/node1.tdomain#DOMAIN, expecting: rm/11.22.33.155#TDOMAIN
As you can see, the domain on the test cluster has the value DOMAIN, but it should be TDOMAIN.
Where can I find the settings for the server principal rm/node1.tdomain#DOMAIN? Is it somewhere on the cluster, or do I have to apply additional settings on my local host to launch spark-submit?
You could look through this deployment-steps doc from Cloudera; you can ignore the Spark Streaming bit.
You need to pass the keytab file in the --files option so that it gets copied onto the remote Spark machine, which then uses it to authenticate with the Kerberos server using your principal/service account, if that server is reachable.
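Sketched against the command from the question, that advice adds one option. This is only an illustration of the answer's suggestion, not a verified fix; if Spark complains about the same file being distributed twice, the --files copy may need to be a differently named duplicate of the keytab.

spark-submit ^
  --deploy-mode cluster ^
  --principal test#tdomain ^
  --keytab "test.keytab" ^
  --files "test.keytab" ^
  --queue garliq ^
  --properties-file "SparkSubmit.conf" ^
  --class ru.rosbank.App ^
  scala-spark-maven-1.0-SNAPSHOT-jar-with-dependencies.jar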

NiFi Writing to HDFS Error: java.lang.IllegalArgumentException: Can not create a Path from an empty string

I am facing a problem with NiFi writing to HDFS. I am getting an error:
ERROR [Timer-Driven Process Thread-10] o.apache.nifi.processors.hadoop.PutHDFS PutHDFS[id=4af43efa-a8ff-18ac-0000-00002377fba5] Failed to properly initialize Processor. If still scheduled to run, NiFi will attempt to initialize and run the Processor again after the 'Administrative Yield Duration' has elapsed. Failure is due to java.lang.reflect.InvocationTargetException: java.lang.reflect.InvocationTargetException
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
at org.apache.nifi.controller.StandardProcessorNode.lambda$initiateStart$1(StandardProcessorNode.java:1364)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Can not create a Path from an empty string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:126)
at org.apache.hadoop.fs.Path.<init>(Path.java:134)
at org.apache.nifi.processors.hadoop.AbstractHadoopProcessor.getConfigurationFromResources(AbstractHadoopProcessor.java:225)
at org.apache.nifi.processors.hadoop.AbstractHadoopProcessor.resetHDFSResources(AbstractHadoopProcessor.java:254)
at org.apache.nifi.processors.hadoop.AbstractHadoopProcessor.abstractOnScheduled(AbstractHadoopProcessor.java:205)
... 15 common frames omitted
My HDFS configuration is:
Note: the same configuration was applied to PutFile and it worked perfectly (kafka.topic was not empty).
It seems that when the PutHDFS processor tries to save the file, it looks for the kafka.topic attribute associated with the flowfile, but the attribute has no value.
Make sure the kafka.topic attribute has some value associated with it; you can use an UpdateAttribute processor before PutHDFS to add the attribute, as sketched below.
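A minimal sketch of that arrangement. The property values are illustrative; it assumes a PutHDFS property expands the attribute through the Expression Language:

UpdateAttribute (placed just before PutHDFS):
    kafka.topic = default_topic          # hypothetical fallback value

PutHDFS:
    Directory = /data/${kafka.topic}     # illustrative; any property that expands
                                         # ${kafka.topic} needs the attribute set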

Can't load JDBC driver class: PostgreSQL + NiFi

I am starting to work with NiFi and this is my first exercise.
I am trying to put a CSV file into a Postgres table. I defined my database driver as shown in the picture.
The error is:
can't load jdbc driver class
In my log file I have this message:
ERROR [StandardProcessScheduler Thread-1] o.a.n.c.s.StandardControllerServiceNode DBCPConnectionPool[id=c25f8f91-0161-1000-a496-8910832bdbd8] F$
org.apache.nifi.reporting.InitializationException: Can't load Database Driver
at org.apache.nifi.dbcp.DBCPConnectionPool.getDriverClassLoader(DBCPConnectionPool.java:249)
at org.apache.nifi.dbcp.DBCPConnectionPool.onConfigured(DBCPConnectionPool.java:198)
at sun.reflect.GeneratedMethodAccessor437.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70)
at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47)
at org.apache.nifi.controller.service.StandardControllerServiceNode$2.run(StandardControllerServiceNode.java:409)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.postgresql.Driver
The Postgres JDBC driver doesn't come packaged with NiFi. You must:
1. Download the driver (jar), e.g. postgresql-42.2.24.jar
2. Place it in the nifi/lib folder
3. Restart NiFi
4. Open your DBCPConnectionPool controller service properties
5. Set Database Driver Class Name to org.postgresql.Driver
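With the driver in place, the controller service properties would look roughly like this (host, port, database name, and user are hypothetical):

Database Connection URL:     jdbc:postgresql://localhost:5432/mydb
Database Driver Class Name:  org.postgresql.Driver
Database User:               nifi
Password:                    (sensitive value)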
Resources
https://jdbc.postgresql.org/download.html
https://docs.oracle.com/cd/E19509-01/820-3497/agqka/index.html
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.14.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html
The asker found the root cause:
I had inserted a return (line break) after the JDBC driver class name.

Put an audio file in Kafka with Spring Cloud Stream

I'm trying to put an audio file with JSON in Kafka. Here is the code:
Producer code
In the consumer I'm trying to get my file like this: Consumer code
The error:
org.springframework.messaging.MessagingException: Exception thrown while invoking com.sofrecom.service.VoiceCampaignCreator#process[1 args]; nested exception is java.lang.NullPointerException
at org.springframework.cloud.stream.binding.StreamListenerMessageHandler.handleRequestMessage(StreamListenerMessageHandler.java:56)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.cloud.stream.binding.DispatchingStreamListenerMessageHandler$ConditionalStreamListenerHandler.handleMessage(DispatchingStreamListenerMessageHandler.java:122)
at org.springframework.cloud.stream.binding.DispatchingStreamListenerMessageHandler.handleRequestMessage(DispatchingStreamListenerMessageHandler.java:75)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:148)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:121)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:89)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:423)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:373)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:292)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:212)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:129)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:115)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.channel.FixedSubscriberChannel.send(FixedSubscriberChannel.java:70)
at org.springframework.integration.channel.FixedSubscriberChannel.send(FixedSubscriberChannel.java:64)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105)
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:171)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$000(KafkaMessageDrivenChannelAdapter.java:54)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:288)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:279)
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter$1.doWithRetry(RetryingAcknowledgingMessageListenerAdapter.java:77)
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter$1.doWithRetry(RetryingAcknowledgingMessageListenerAdapter.java:72)
at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:286)
at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:179)
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter.onMessage(RetryingAcknowledgingMessageListenerAdapter.java:72)
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter.onMessage(RetryingAcknowledgingMessageListenerAdapter.java:39)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:771)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:715)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.access$2600(KafkaMessageListenerContainer.java:231)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer$ListenerInvoker.run(KafkaMessageListenerContainer.java:1004)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException: null
at org.apache.tomcat.util.http.fileupload.disk.DiskFileItem.getSize(DiskFileItem.java:267)
at org.apache.catalina.core.ApplicationPart.getSize(ApplicationPart.java:110)
at org.springframework.web.multipart.support.StandardMultipartHttpServletRequest$StandardMultipartFile.getSize(StandardMultipartHttpServletRequest.java:287)
at com.sofrecom.service.VoiceCampaignCreator.process(VoiceCampaignCreator.java:44)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:180)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:112)
at org.springframework.cloud.stream.binding.StreamListenerMessageHandler.handleRequestMessage(StreamListenerMessageHandler.java:48)
... 42 common frames omitted
Any help please!
The problem seems pretty obvious...
Caused by: java.lang.NullPointerException: null
at org.apache.tomcat.util.http.fileupload.disk.DiskFileItem.getSize(DiskFileItem.java:267)
at org.apache.catalina.core.ApplicationPart.getSize(ApplicationPart.java:110)
at org.springframework.web.multipart.support.StandardMultipartHttpServletRequest$StandardMultipartFile.getSize(StandardMultipartHttpServletRequest.java:287)
at com.sofrecom.service.VoiceCampaignCreator.process(VoiceCampaignCreator.java:44)
It looks like you are trying to decode a web request outside of a web environment. You need to decode the multipart before sending the data to Kafka.
EDIT
In DiskFileItem...
private transient DeferredFileOutputStream dfos;
... dfos is transient - so it won't get serialized (obviously - because it's a Stream).
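A minimal sketch of that idea, assuming a Spring Cloud Stream 1.x-era Source binding (the class name, endpoint, and header name are hypothetical): read the multipart's bytes while the HTTP request is still open, and send a plain byte[] plus metadata instead of the MultipartFile itself.

import java.io.IOException;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
@EnableBinding(Source.class)
public class AudioUploadController {

    @Autowired
    private Source source;

    @PostMapping("/campaigns")
    public void create(@RequestParam("audio") MultipartFile audio) throws IOException {
        // Read the upload while the HTTP request is still open: a MultipartFile
        // wraps a transient stream (the dfos field above), so it cannot be
        // serialized and rebuilt on the consumer side.
        byte[] payload = audio.getBytes();
        source.output().send(MessageBuilder.withPayload(payload)
                .setHeader("filename", audio.getOriginalFilename())
                .build());
    }
}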

How to programmatically submit a Spark application in yarn-client mode?

I have a simple Spark job that replaces spaces with commas in a given input file.
When this job is submitted locally (using the IDE and executing the built jar), it completes successfully; when the master is set to "yarn-client", the job hangs for a very long time and throws the following exception.
We have a use case where we want to submit the job programmatically rather than building a jar and submitting it through spark-submit.
Spark version: 1.6.1
Hadoop version: 2.7.1
I have all the Spark, YARN, and Hadoop dependencies in my pom.
The job failed with the following exception:
java.net.ConnectException: Call From spark.node123.com/192.168.2.1 to 0.0.0.0:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor13.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
at org.apache.hadoop.ipc.Client.call(Client.java:1480)
at org.apache.hadoop.ipc.Client.call(Client.java:1407)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy10.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:152)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy11.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:246)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:129)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:129)
at org.apache.spark.Logging$class.logInfo(Logging.scala:58)
at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:62)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:128)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at tardis.platform.TardisContext$.apply(TardisContext.scala:20)
at tardis.common.plugins.Heartbeat.isAbleTocreateContext(Heartbeat.scala:45)
at tardis.common.plugins.Heartbeat.performAction(Heartbeat.scala:33)
at tardis.core.scheduler.jobs.PluginExecutorJob.execute(PluginExecutorJob.scala:40)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
at org.apache.hadoop.ipc.Client.call(Client.java:1446)
... 25 more
I had to add the Hadoop and YARN configurations to successfully submit the application in yarn-client mode.
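A minimal sketch of what that can look like with the Spark 1.6-era Java API from the question (host names are hypothetical). The spark.hadoop.* prefix copies properties into the underlying Hadoop Configuration, which is what points the YARN client away from the 0.0.0.0:8032 default seen in the exception:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class YarnClientSubmitSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setMaster("yarn-client")
                .setAppName("SpaceToComma");
        // Without these, the YARN client falls back to 0.0.0.0:8032,
        // which is exactly the address in the ConnectException above.
        conf.set("spark.hadoop.yarn.resourcemanager.address", "rm-host:8032");
        conf.set("spark.hadoop.yarn.resourcemanager.scheduler.address", "rm-host:8030");
        conf.set("spark.hadoop.fs.defaultFS", "hdfs://namenode-host:8020");

        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... job logic: read the input file, replace spaces with commas ...
        sc.stop();
    }
}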
You cannot remotely submit your Spark job in client mode, since your computer has to run the driver program itself, which requires many connections. If you insist on using this method, you have to configure your firewall to allow the necessary ports through to the cluster. Using cluster mode, or submitting from the master node, is much less painful.