I am trying to deploy the Confluent Kafka Connect S3 connector in distributed mode, but I am encountering the following error:
(org.eclipse.jetty.server.HttpChannel) [qtp1620643420-22]
java.lang.AbstractMethodError: javax.ws.rs.core.UriBuilder.uri(Ljava/lang/String;)Ljavax/ws/rs/core/UriBuilder;
at javax.ws.rs.core.UriBuilder.fromUri(UriBuilder.java:96)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:275)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:852)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:544)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1581)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1307)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:482)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1549)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1204)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:494)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:374)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:268)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:135)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918)
at java.lang.Thread.run(Thread.java:748)
I can see javax.ws.rs-api-2.1.1.jar in the lib folder, but that does not solve the issue. I also tried adding the GlassFish Jersey jars, but that didn't help either.
I am not sure what the issue is. Has anyone faced this issue before and can help?
Version I am using:
Confluent Kafka Connect S3 connector version: 5.5.1
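As a debugging aid (a minimal sketch of my own, not from the original post), you can print which jar javax.ws.rs.core.UriBuilder is actually loaded from; an AbstractMethodError like the one above typically means an older JAX-RS 1.x API jar, pulled in by some other dependency, is shadowing javax.ws.rs-api-2.1.1.jar on the classpath:

import javax.ws.rs.core.UriBuilder;
import java.security.CodeSource;

public class UriBuilderCheck {
    public static void main(String[] args) {
        // Print the jar UriBuilder was loaded from. If it is not
        // javax.ws.rs-api-2.1.1.jar, another JAX-RS API jar is shadowing it.
        CodeSource src = UriBuilder.class.getProtectionDomain().getCodeSource();
        System.out.println(src != null ? src.getLocation() : "loaded by the bootstrap classloader");
    }
}

Running this with the same classpath as the Connect worker should reveal which jar wins the classpath race.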
Related
I am evaluating the Confluent Kafka S3 Source Connector and am stuck on an issue with the following stack trace:
[2020-12-22 15:27:41,636] ERROR WorkerConnector{id=s3-source-connector} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector)
org.apache.kafka.connect.errors.ConnectException: Failed to get list of folders from S3 bucket - kafka-connect for key path - topics/ and delimiter - /
at io.confluent.connect.s3.source.S3Storage.listFolders(S3Storage.java:286)
at io.confluent.connect.s3.source.S3Storage.getPartitions(S3Storage.java:98)
at io.confluent.connect.storage.partitioner.TimeBasedPartitioner.getPartitions(TimeBasedPartitioner.java:50)
at io.confluent.connect.cloud.storage.source.StorageSourceConnector.doStart(StorageSourceConnector.java:77)
at io.confluent.connect.cloud.storage.source.StorageSourceConnector.start(StorageSourceConnector.java:69)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:111)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:136)
at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:196)
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:242)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:908)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:110)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:924)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:920)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.SdkClientException: Couldn't initialize a SAX driver to create an XMLReader
at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.<init>(XmlResponsesSaxParser.java:123)
at com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:127)
at com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:117)
at com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
at com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:31)
at com.amazonaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:69)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1714)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleSuccessResponse(AmazonHttpClient.java:1434)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1356)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1139)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:796)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:764)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:738)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:698)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:680)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:544)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:524)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5052)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4998)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4992)
at com.amazonaws.services.s3.AmazonS3Client.listObjectsV2(AmazonS3Client.java:938)
at io.confluent.connect.s3.source.S3Storage.listFolders(S3Storage.java:283)
... 16 more
Caused by: org.xml.sax.SAXException: SAX2 driver class org.apache.xerces.parsers.SAXParser not found
java.lang.ClassNotFoundException: org.apache.xerces.parsers.SAXParser
at org.xml.sax.helpers.XMLReaderFactory.loadClass(XMLReaderFactory.java:230)
at org.xml.sax.helpers.XMLReaderFactory.createXMLReader(XMLReaderFactory.java:191)
at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.<init>(XmlResponsesSaxParser.java:120)
... 37 more
Caused by: java.lang.ClassNotFoundException: org.apache.xerces.parsers.SAXParser
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.xml.sax.helpers.NewInstance.newInstance(NewInstance.java:82)
at org.xml.sax.helpers.XMLReaderFactory.loadClass(XMLReaderFactory.java:228)
... 39 more
Connector config:
{
  "name": "source-connector",
  "config": {
    "connector.class": "io.confluent.connect.s3.source.S3SourceConnector",
    "s3.bucket.name": "bucket-test",
    "s3.region": "us-west-2",
    "tasks.max": "1",
    "topics": "migration-topic",
    "topics.dir": "topics/events",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "behavior.on.error": "log",
    "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
    "path.format": "'date'=YYYY-MM-dd/'hour'=HH",
    "key.converter": "com.pandadoc.kafka.connect.msgpack.converter.MessagePackConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "com.pandadoc.kafka.connect.msgpack.converter.MessagePackConverter",
    "value.converter.schemas.enable": "false",
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "kafka-connect-dead-letter-queue",
    "errors.deadletterqueue.context.headers.enable": true,
    "confluent.license": "",
    "confluent.topic.bootstrap.servers": "localhost:9092",
    "confluent.topic.replication.factor": "3"
  }
}
Versions:
[2020-12-22 15:27:41,640] INFO Kafka version: 2.2.2-cp3 (org.apache.kafka.common.utils.AppInfoParser)
[2020-12-22 15:27:41,640] INFO Kafka commitId: 602b2e2e105b4d34 (org.apache.kafka.common.utils.AppInfoParser)
It could be this JDK bug: https://bugs.openjdk.java.net/browse/JDK-8015099. It has been fixed in JDK 9+.
The Confluent Docker image confluentinc/cp-kafka-connect:5.2.4 uses JDK 8:
openjdk version "1.8.0_172"
OpenJDK Runtime Environment (Zulu 8.30.0.1-linux64) (build 1.8.0_172-b01)
OpenJDK 64-Bit Server VM (Zulu 8.30.0.1-linux64) (build 25.172-b01, mixed mode)
Any other ideas on what could be wrong?
I've sorted the issue out 😅
It turned out to be the JDK bug above that caused this behavior.
There is an interoperability table for Kafka Connect and Kafka versions, so there are two options:
Tweak the Kafka Connect Docker image by installing JDK 9+
Bump Kafka Connect up to 6.x (if your Kafka version allows it), which uses JDK 11
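If neither option is practical, a possible JVM-level workaround (my assumption, not verified against this image) is to pin the SAX driver explicitly so that XMLReaderFactory skips the buggy lookup path, by passing a system property to the Connect worker, e.g.:

KAFKA_OPTS="-Dorg.xml.sax.driver=com.sun.org.apache.xerces.internal.parsers.SAXParser"

com.sun.org.apache.xerces.internal.parsers.SAXParser is the Xerces copy bundled with JDK 8, so no extra jar should be needed; still, treat this as a sketch to test rather than a confirmed fix.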
I am trying out Kafka with a Postgres sink, using the JDBC sink connector.
Exception:
INFO Unable to connect to database on attempt 1/3. Will retry in 10000 ms. (io.confluent.connect.jdbc.util.CachedConnectionProvider:91)
java.sql.SQLException: No suitable driver found for jdbc:postgresql://localhost:5432/casb
at java.sql.DriverManager.getConnection(DriverManager.java:689)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.newConnection(CachedConnectionProvider.java:85)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.getValidConnection(CachedConnectionProvider.java:68)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:56)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:69)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:288)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:198)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:166)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Sink.properties:
name=test-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=fp_test
connection.url=jdbc:postgresql://localhost:5432/casb
connection.user=admin
connection.password=***
auto.create=true
I have set plugin.path=/usr/share/java/kafka-connect-jdbc.
In /usr/share/java/kafka-connect-jdbc I have the following files:
kafka-connect-jdbc-4.0.0.jar, postgresql-9.4-1206-jdbc41.jar, sqlite-jdbc-3.8.11.2.jar, and some other jars that come packaged with Confluent.
I then downloaded the Postgres JDBC driver jar postgresql-42.2.2.jar, copied it into the same folder, and tried again. Still the same exception.
Kindly help me out with this.
Setting plugin.path=/usr/share/java and CLASSPATH=/usr/share/java/kafka-connect-jdbc/ solved the issue.
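As a quick sanity check (a minimal sketch, not part of the original answer), you can verify the driver jar is visible to a plain JVM by resolving the same JDBC URL directly; if this also fails with "No suitable driver", the jar is not where the JVM is looking:

import java.sql.Connection;
import java.sql.DriverManager;

public class PgDriverCheck {
    public static void main(String[] args) throws Exception {
        // Succeeds only if a Postgres driver jar is on the classpath.
        // URL and user are the ones from Sink.properties; password elided.
        try (Connection c = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/casb", "admin", "<password>")) {
            System.out.println("Driver OK: " + c.getMetaData().getDriverVersion());
        }
    }
}

Run it with, for example, java -cp .:/usr/share/java/kafka-connect-jdbc/postgresql-42.2.2.jar PgDriverCheck to confirm the jar itself is fine.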
I am working on embedded Kafka tests, and I was having an issue with rebalancing on Kafka 0.10.1.1, similar to this: Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to rebalance.
Following suggestions I found online, I updated the Kafka version to 0.10.2.0.
However, the tests now throw this error:
java.lang.NoSuchMethodError: kafka.cluster.Broker.endPoints()Lscala/collection/Map;
at io.confluent.kafka.schemaregistry.storage.KafkaStore.brokersToEndpoints(KafkaStore.java:277)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.<init>(KafkaStore.java:122)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:144)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:53)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
at io.confluent.rest.Application.createServer(Application.java:149)
at io.confluent.kafka.schemaregistry.RestApp.start(RestApp.java:59)
at com.kafka.EmbeddedSingleNodeKafkaCluster.start(EmbeddedSingleNodeKafkaCluster.java:82)
at com.kafka.EmbeddedSingleNodeKafkaCluster.before(EmbeddedSingleNodeKafkaCluster.java:100)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:46)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:191)
at com.CustomizedIntegrationTestClassRunner.run(CustomizedIntegrationTestClassRunner.java:22)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Are there any changes I need to make in the config properties?
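Not a confirmed answer, but a NoSuchMethodError on kafka.cluster.Broker.endPoints() usually means the Schema Registry artifact on the test classpath was compiled against a different Kafka than the one you upgraded to (that Scala API changed around 0.10.2). Below is a sketch of the kind of dependency alignment to check in the build; the 3.2.0/0.10.2.0 pairing is my assumption based on the Confluent compatibility matrix, so verify it for your setup:

<!-- Assumed pairing: Kafka 0.10.2.x ships with Confluent Platform 3.2.x -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.2.0</version>
</dependency>
<dependency>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-schema-registry</artifactId>
    <version>3.2.0</version>
</dependency>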
A task that works in Spark local mode is not working on a standalone cluster running on the same machine.
The only difference is:
local[*]
vs
spark://<host>.local:7077
for the master
I am able to run the Spark Pi example against the master at the above address and also use the Spark web UI, so the master address is generally working for Spark.
Here is the (normal) Spark init code:
val sconf = new SparkConf().setMaster(master).setAppName("EpisCatalog")
val sc = new SparkContext(sconf)
Here is the stacktrace from running the program:
15/12/03 03:39:04.746 main WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/03 03:39:07.706 main WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/12/03 03:39:27.739 appclient-registration-retry-thread ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[appclient-registration-retry-thread,5,main]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@b649f0b rejected from java.util.concurrent.ThreadPoolExecutor@5ef7a52b[Running, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:103)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:102)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint.tryRegisterAllMasters(AppClient.scala:102)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint.org$apache$spark$deploy$client$AppClient$ClientEndpoint$$registerWithMaster(AppClient.scala:128)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:139)
at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1130)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:131)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I am running Spark 1.6.0-SNAPSHOT. It has been "installed" to the local Maven repo, and I have verified that the client is using the latest version from it.
I had the same problem. It could be solved by using the full host URL (it can be found on the master web UI, port 18080) instead of just the hostname or localhost.
So I had to use mymachine.mycompany.org instead of mymachine.
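Applied to the init code from the question, that is a one-line change (the hostname below is the example from this answer):

val sconf = new SparkConf().setMaster("spark://mymachine.mycompany.org:7077").setAppName("EpisCatalog")
val sc = new SparkContext(sconf)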
I got the same problem, and in my case there was a version mismatch: my Spark driver was written against 1.5.1 while the Spark cluster was set up on 1.6.0.
Maybe you deployed your cluster on the stable version, which at that time was 1.5.1, while your driver is on 1.6.0-SNAPSHOT.
I have a Storm cluster with 2 nodes and 1 ZooKeeper node. One of the workers dies because of the following error. Does anyone have an idea why the stormconf.ser file is getting deleted?
I am using Storm 0.9.2 and ZooKeeper 3.4.6.
o.a.c.f.s.ConnectionStateManager [INFO] State change: CONNECTED
2015-01-31 01:23:06 o.a.c.f.s.ConnectionStateManager [WARN] There are no ConnectionStateListeners registered.
2015-01-31 01:23:07 b.s.d.worker [ERROR] Error on initialization of server mk-worker
java.io.FileNotFoundException: File '/home/Programs/apache-storm-0.9.2-incubating/stormtmp/supervisor/stormdist/storm-topology-1-1422602934/stormconf.ser' does not exist
at org.apache.commons.io.FileUtils.openInputStream(FileUtils.java:299) ~[commons-io-2.4.jar:2.4]
at org.apache.commons.io.FileUtils.readFileToByteArray(FileUtils.java:1763) ~[commons-io-2.4.jar:2.4]
at backtype.storm.config$read_supervisor_storm_conf.invoke(config.clj:212) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.daemon.worker$worker_data.invoke(worker.clj:180) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.daemon.worker$fn__5940$exec_fn__1396__auto____5941.invoke(worker.clj:356) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at clojure.lang.AFn.applyToHelper(AFn.java:185) [clojure-1.5.1.jar:na]
at clojure.lang.AFn.applyTo(AFn.java:151) [clojure-1.5.1.jar:na]
at clojure.core$apply.invoke(core.clj:617) ~[clojure-1.5.1.jar:na]
at backtype.storm.daemon.worker$fn__5940$mk_worker__5996.doInvoke(worker.clj:347) [storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at clojure.lang.RestFn.invoke(RestFn.java:512) [clojure-1.5.1.jar:na]
at backtype.storm.daemon.worker$_main.invoke(worker.clj:454) [storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at clojure.lang.AFn.applyToHelper(AFn.java:172) [clojure-1.5.1.jar:na]
at clojure.lang.AFn.applyTo(AFn.java:151) [clojure-1.5.1.jar:na]
at backtype.storm.daemon.worker.main(Unknown Source) [storm-core-0.9.2-incubating.jar:0.9.2-incubating]
2015-01-31 01:23:07 b.s.util [INFO] Halting process: ("Error on initialization")
That was a known issue in old releases of Storm. Usually what you need to do is clear the directories that Storm is using. You can find the names of those directories in Storm's configuration file, conf/storm.yaml.
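For reference, the directory to clear is whatever storm.local.dir points at; judging by the path in the stack trace above, in this setup the entry in conf/storm.yaml would presumably look like:

# conf/storm.yaml (assumed, inferred from the path in the stack trace)
storm.local.dir: "/home/Programs/apache-storm-0.9.2-incubating/stormtmp"

Stop the supervisor and workers before removing anything under that directory.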
Just to make it clear: this issue appears to be fixed in Storm 0.9.4.
Details in the issue linked by @tousif: https://issues.apache.org/jira/browse/STORM-130
Downgrade the HDP version to 2.5 or lower
Use Storm version 1.0.1
That solved the issue for me...
In my case, clearing the directory was useless.