eclipse console suddenly showing INFO

I'm using Apache HttpClient and I've started seeing some INFO output on the Eclipse console:
0 [main] INFO org.apache.commons.httpclient.HttpMethodDirector - I/O exception (java.net.ConnectException) caught when processing request: Connection refused: connect
3 [main] INFO org.apache.commons.httpclient.HttpMethodDirector - Retrying request
3861 [pool-1-thread-25] INFO org.apache.commons.httpclient.HttpMethodDirector - I/O exception (java.net.ConnectException) caught when processing request: Connection refused: connect
3861 [pool-1-thread-25] INFO org.apache.commons.httpclient.HttpMethodDirector - Retrying request
3913 [pool-1-thread-16] INFO org.apache.commons.httpclient.HttpMethodBase - Response content length is not known
To my knowledge, nothing has changed. How can I get rid of it?

It's probably your logging setup. HttpClient depends on commons-logging, which automatically picks up a logging implementation from your classpath (either java.util.logging or log4j), and that implementation writes to the console by default.
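If log4j is the implementation commons-logging picks up, you can raise the threshold for the HttpClient loggers in your log4j.properties (a minimal sketch; org.apache.commons.httpclient is the logger visible in your output, httpclient.wire is HttpClient's wire-level logger):

    # only log ERROR and above for HttpClient
    log4j.logger.org.apache.commons.httpclient=ERROR
    log4j.logger.httpclient.wire=ERROR

Alternatively, force commons-logging onto its built-in SimpleLog and silence the same logger via system properties, e.g. -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.SimpleLog -Dorg.apache.commons.logging.simplelog.log.org.apache.commons.httpclient=error.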

Related

Unable to start Drill in distributed mode

I am trying to get Drill v1.18 running and am facing the error below.
The drill-override.conf points to the ZooKeeper instance, which runs on port 12181. When started in distributed mode, Drill fails with the following log output; embedded mode has no issues.
It looks like a permission issue, but ZooKeeper, Drill, and the ZooKeeper data directory all run under the same user.
2020-05-10 16:23:01,160 [main] DEBUG o.apache.drill.exec.server.Drillbit - Construction started.
2020-05-10 16:23:01,448 [main] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Connect localhost:12181, zkRoot drill, clusterId: drillbits1
2020-05-10 16:23:01,531 [main] INFO o.a.d.e.s.s.PersistentStoreRegistry - Using the configured PStoreProvider class: 'org.apache.drill.exec.store.sys.store.provider.ZookeeperPersistentStoreProvider'.
2020-05-10 16:23:01,718 [main] DEBUG o.a.drill.exec.ssl.SSLConfigServer - Using Hadoop configuration for SSL
2020-05-10 16:23:01,718 [main] DEBUG o.a.drill.exec.ssl.SSLConfigServer - Hadoop SSL configuration file: ssl-server.xml
2020-05-10 16:23:01,731 [main] DEBUG org.apache.drill.exec.ssl.SSLConfig - Initialized SSL context.
2020-05-10 16:23:01,731 [main] INFO o.a.drill.exec.rpc.user.UserServer - Rpc server configured to use TLS protocol 'TLSv1.2'
2020-05-10 16:23:01,738 [main] INFO o.apache.drill.exec.server.Drillbit - Construction completed (577 ms).
2020-05-10 16:23:01,738 [main] DEBUG o.apache.drill.exec.server.Drillbit - Startup begun.
2020-05-10 16:23:01,738 [main] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Starting ZKClusterCoordination.
2020-05-10 16:23:03,775 [main] ERROR o.apache.drill.exec.server.Drillbit - Failure during initial startup of Drillbit.
org.apache.zookeeper.KeeperException$UnimplementedException: KeeperErrorCode = Unimplemented for /drill
at org.apache.zookeeper.KeeperException.create(KeeperException.java:106)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1538)
at org.apache.curator.utils.ZKPaths.mkdirs(ZKPaths.java:351)
at org.apache.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:230)
at org.apache.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:224)
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:67)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:81)
at org.apache.curator.framework.imps.ExistsBuilderImpl.pathInForeground(ExistsBuilderImpl.java:221)
at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:206)
at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:35)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.createContainers(CuratorFrameworkImpl.java:265)
at org.apache.curator.framework.EnsureContainers.internalEnsure(EnsureContainers.java:69)
at org.apache.curator.framework.EnsureContainers.ensure(EnsureContainers.java:53)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.ensurePath(PathChildrenCache.java:596)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.rebuild(PathChildrenCache.java:327)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.start(PathChildrenCache.java:304)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.start(PathChildrenCache.java:252)
at org.apache.curator.x.discovery.details.ServiceCacheImpl.start(ServiceCacheImpl.java:99)
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCoordinator.java:145)
at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:220)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:584)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:554)
at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:550)
Version 1.17 has no issues starting in distributed mode.
The issue here is the ZooKeeper version. You are probably running a 3.4.x version, but the current version of Drill requires 3.5.x. As a workaround, you may replace the bundled jars/ext/zookeeper-3.5.7.jar and jars/ext/zookeeper-jute-3.5.7.jar with the jars that correspond to your ZooKeeper version.
In addition to Vova Vysotskyi's answer, you may find more information about this issue in the Drill documentation:
https://drill.apache.org/docs/distributed-mode-prerequisites/
Starting in Drill 1.18 the bundled ZooKeeper libraries are upgraded to version 3.5.7, preventing connections to older (< 3.5) ZooKeeper clusters. In order to connect to a ZooKeeper < 3.5 cluster, replace the ZooKeeper library JARs in ${DRILL_HOME}/jars/ext with zookeeper-3.4.x.jar then restart the cluster.
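In practice the swap is a plain file replacement under the Drill install (a sketch, assuming your cluster runs ZooKeeper 3.4.14; substitute the jar that matches your ZooKeeper version):

    cd ${DRILL_HOME}/jars/ext
    rm zookeeper-3.5.7.jar zookeeper-jute-3.5.7.jar
    cp /path/to/zookeeper-3.4.14.jar .

Restart the Drillbits afterwards so they pick up the replaced library.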

Connect Timeout Exception on Url - http://localhost:8888 & Could not locate PropertySource: I/O error on GET request

When the Spring Cloud Data Flow Http-Source app pod starts on Kubernetes, I notice the following two messages in the console:
Connect Timeout Exception on Url - http://localhost:8888.
Could not locate PropertySource: I/O error on GET request for "http://localhost:8888/http-source/default": Connection refused (Connection refused); nested exception is java.net.ConnectException
How can I get this resolved?
subscriber to the 'errorChannel' channel
2019-09-15 05:17:26.773 INFO 1 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'application-1.errorChannel' has 1 subscriber(s).
2019-09-15 05:17:26.774 INFO 1 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started _org.springframework.integration.errorLogger
2019-09-15 05:17:27.065 INFO 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : http://localhost:8888
2019-09-15 05:17:27.137 INFO 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Connect Timeout Exception on Url - http://localhost:8888. Will be trying the next url if available
2019-09-15 05:17:27.141 WARN 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Could not locate PropertySource: I/O error on GET request for "http://localhost:8888/http-source/default": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
If you look carefully, the following message will be logged as a WARN in the logs.
Connect Timeout Exception on Url - http://localhost:8888.
Could not locate PropertySource: I/O error on GET request for "http://localhost:8888/http-source/default": Connection refused (Connection refused); nested exception is java.net.ConnectException
You'd see this WARN message for all the apps that we ship, as well as for the SCDF and Skipper servers running on K8s. It means that the app (or SCDF, or Skipper) doesn't have a config server configured, so it falls back to the default http://localhost:8888.
Background: we provide the config-server dependency in all the apps that we ship to help you get started with it quickly.
If you don't use the config server, that's fine; it will not cause any harm, so there is nothing to worry about.
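If you'd rather not see the WARN at all, one option is to switch the config client off for apps that don't use a config server, e.g. in application.properties (a sketch; this disables the lookup rather than pointing it at a real server):

    # skip the Spring Cloud Config server lookup entirely
    spring.cloud.config.enabled=false

With that property set, the app skips the http://localhost:8888 fetch at startup and the two messages disappear.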

Running Kafka server throws NoClassDefFoundError exceptions

I am getting a lot of NoClassDefFoundError exceptions on a running Kafka server. I am not sure what is causing this. Has anyone else faced this issue?
I am using Kafka v2.1.1 and Java 11. I recently upgraded to v2.1.1.
Update:
Note that I am seeing these exceptions on only 2 of the 5 brokers in the cluster.
Exception 1:
[2019-02-28 00:54:49,288] INFO Successfully authenticated client: authenticationID=rajputs/hostbased@UNIX.DESHAW.COM; authorizationID=rajputs/hostbased@UNIX.DESHAW.COM. (org.apache.kafka.common.security.authenticator.SaslServerCallbackHandler)
[2019-02-28 00:54:49,469] ERROR [KafkaApi-4] Error when handling request: clientId=consumer-2, correlationId=1686, api=METADATA, body={topics=[fps.rajputs.jpse.desim_se],allow_auto_topic_creation=true} (kafka.server.KafkaApis)
java.lang.NoClassDefFoundError: scala/collection/immutable/SortedMap$$anon$1
at scala.collection.immutable.SortedMap.filterKeys(SortedMap.scala:87)
at scala.collection.immutable.SortedMap.filterKeys$(SortedMap.scala:87)
at scala.collection.immutable.TreeMap.filterKeys(TreeMap.scala:46)
at kafka.security.auth.SimpleAclAuthorizer.$anonfun$getMatchingAcls$1(SimpleAclAuthorizer.scala:245)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:257)
at kafka.security.auth.SimpleAclAuthorizer.getMatchingAcls(SimpleAclAuthorizer.scala:233)
at kafka.security.auth.SimpleAclAuthorizer.aclsAllowAccess$1(SimpleAclAuthorizer.scala:154)
at kafka.security.auth.SimpleAclAuthorizer.authorize(SimpleAclAuthorizer.scala:159)
at kafka.server.KafkaApis.$anonfun$authorize$1(KafkaApis.scala:371)
at kafka.server.KafkaApis.$anonfun$authorize$1$adapted(KafkaApis.scala:371)
at scala.Option.forall(Option.scala:247)
at kafka.server.KafkaApis.authorize(KafkaApis.scala:371)
at kafka.server.KafkaApis.$anonfun$handleTopicMetadataRequest$1(KafkaApis.scala:970)
at kafka.server.KafkaApis.$anonfun$handleTopicMetadataRequest$1$adapted(KafkaApis.scala:970)
at scala.collection.TraversableLike.$anonfun$partition$1(TraversableLike.scala:313)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:95)
at scala.collection.TraversableLike.partition(TraversableLike.scala:313)
at scala.collection.TraversableLike.partition$(TraversableLike.scala:311)
at scala.collection.AbstractTraversable.partition(Traversable.scala:104)
at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:970)
at kafka.server.KafkaApis.handle(KafkaApis.scala:109)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
at java.base/java.lang.Thread.run(Thread.java:834)
Exception 2:
[2019-02-28 01:05:58,920] INFO [GroupMetadataManager brokerId=4] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-02-28 01:06:04,296] WARN Client session timed out, have not heard from server in 4002ms for sessionid 0x468d0a6602700cc (org.apache.zookeeper.ClientCnxn)
[2019-02-28 01:06:04,296] INFO Client session timed out, have not heard from server in 4002ms for sessionid 0x468d0a6602700cc, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2019-02-28 01:06:04,972] INFO Client will use GSSAPI as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-02-28 01:06:04,972] INFO Opening socket connection to server mwkafka-zk-test-03.dr/10.218.247.19:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-02-28 01:06:04,973] INFO Socket connection established to mwkafka-zk-test-03.dr/10.218.247.19:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-02-28 01:06:04,973] WARN Session 0x468d0a6602700cc for server mwkafka-zk-test-03.dr/10.218.247.19:2181, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.lang.NoClassDefFoundError: org/apache/zookeeper/proto/SetWatches
at org.apache.zookeeper.ClientCnxn$SendThread.primeConnection(ClientCnxn.java:929)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:363)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1145)
Caused by: java.lang.ClassNotFoundException: org.apache.zookeeper.proto.SetWatches
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 3 more

Leader election with Curator and Zookeeper

I am running 3 instances of ZooKeeper and the config is this:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper1
clientPort=2181
maxClientCnxns=1000
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
I am using the leader election example code given here:
https://git-wip-us.apache.org/repos/asf?p=curator.git;a=tree;f=curator-examples/src/main/java/leader;h=73b547eadb98995c0ccbd06a5b76d0741ffef263;hb=HEAD
The code runs fine with TestingServer, but when I change the connection string to "127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183", I get these exceptions:
[main-SendThread(127.0.0.1:2183)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 127.0.0.1/127.0.0.1:2183. Will not attempt to authenticate using SASL (unknown error)
[main-SendThread(127.0.0.1:2183)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /127.0.0.1:56111, server: 127.0.0.1/127.0.0.1:2183
[main-SendThread(127.0.0.1:2183)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server 127.0.0.1/127.0.0.1:2183, sessionid = 0x3521552283c0000, negotiated timeout = 40000
[main-EventThread] INFO org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
[main-SendThread(127.0.0.1:2183)] INFO org.apache.zookeeper.ClientCnxn - Unable to read additional data from server sessionid 0x3521552283c0000, likely server has closed socket, closing socket connection and attempting reconnect
[main-EventThread] INFO org.apache.curator.framework.imps.EnsembleTracker - New config event received: null
[main-EventThread] ERROR org.apache.curator.framework.imps.CuratorFrameworkImpl - Background exception was not retry-able or retry gave up
java.lang.NullPointerException
at java.io.ByteArrayInputStream.<init>(ByteArrayInputStream.java:106)
at org.apache.curator.framework.imps.EnsembleTracker.processConfigData(EnsembleTracker.java:163)
at org.apache.curator.framework.imps.EnsembleTracker.access$200(EnsembleTracker.java:48)
at org.apache.curator.framework.imps.EnsembleTracker$2.processResult(EnsembleTracker.java:134)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.sendToBackgroundCallback(CuratorFrameworkImpl.java:829)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:611)
at org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:151)
at org.apache.curator.framework.imps.GetConfigBuilderImpl$2.processResult(GetConfigBuilderImpl.java:210)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:619)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:528)
[main-EventThread] INFO org.apache.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
What could be the issue?
I am hitting the same issue. I think it might be related to the ZooKeeper 3.5.1 ClientCnxn. Even after going back to Curator 2.6.0, I still see the same stack trace: a GET_CONFIG event type is sent without the event data.
My stack trace looks like this:
org.apache.curator.framework.imps.CuratorFrameworkImpl: Background exception was not retry-able or retry gave up
! java.lang.NullPointerException: null
! at java.io.ByteArrayInputStream.<init>(ByteArrayInputStream.java:106)
! at org.apache.curator.framework.imps.EnsembleTracker.processConfigData(EnsembleTracker.java:163)
! at org.apache.curator.framework.imps.EnsembleTracker.access$200(EnsembleTracker.java:48)
! at org.apache.curator.framework.imps.EnsembleTracker$2.processResult(EnsembleTracker.java:134)
! at org.apache.curator.framework.imps.CuratorFrameworkImpl.sendToBackgroundCallback(CuratorFrameworkImpl.java:829)
! at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:611)
! at org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:151)
! at org.apache.curator.framework.imps.GetConfigBuilderImpl$2.processResult(GetConfigBuilderImpl.java:210)
! at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:619)
! at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:528)
If you use ZooKeeper 3.5.1, then curator-recipes 3.2.1+ fixes this issue.
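In a Maven build that means pinning the recipes artifact to a 3.x release (a sketch; 3.2.1 is the version named above, use the latest 3.x available to you):

    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-recipes</artifactId>
        <version>3.2.1</version>
    </dependency>

As far as I know, the Curator 3.x line is the one built against ZooKeeper 3.5.x.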

Zookeeper: Connection request from old client will be dropped if server is in r-o mode

storm version: 0.8.2
zookeeper version: 3.4.5.
We have a small storm cluster (1 nimbus and 3 supervisors), so using just 1 zookeeper instance that's co-located with storm nimbus.
Infrequently we start getting the following errors in the zookeeper logs and our storm cluster comes to a standstill.
2014-04-05 13:27:32,885 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.0.1.183:56121
2014-04-05 13:27:32,886 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@793] - Connection request from old client /10.0.1.183:56121; will be dropped if server is in r-o mode
2014-04-05 13:27:32,886 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@832] - Client attempting to renew session 0x1452dd02834002e at /10.0.1.183:56121
2014-04-05 13:27:32,886 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@595] - Established session 0x1452dd02834002e with negotiated timeout 40000 for client /10.0.1.183:56121
On the storm end we start seeing the following in supervisor and worker logs:
2014-04-05 11:37:29 ConnectionStateManager [WARN] There are no ConnectionStateListeners registered.
2014-04-05 11:37:29 cluster [WARN] Received event :disconnected::none: with disconnected Zookeeper.
2014-04-05 11:37:31 ClientCnxn [WARN] Session 0x1452dd028340015 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
2014-04-05 11:37:42 CuratorFrameworkImpl [ERROR] Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
at com.netflix.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:380)
at com.netflix.curator.framework.imps.BackgroundSyncImpl$1.processResult(BackgroundSyncImpl.java:49)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:617)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
Do we need to downgrade zookeeper to 3.3.3 or is there a known issue/config that we're missing?
We also experienced several issues with Storm 0.9 and Zookeeper 3.4.X, even though not exactly the one you describe.
The Storm mailing list is also reporting such incompatibility issues:
https://mail.google.com/mail/u/0/#search/label%3Astorm+zookeeper+3.4/144313a45ba069b5
https://mail.google.com/mail/u/0/#search/label%3Astorm+zookeeper+3.4/1447d95d10ce7582
The latter one points to this Storm pull request, which should let us use ZK 3.4.x with future versions of Storm once it is released:
https://github.com/apache/incubator-storm/pull/29
Until then, I would recommend downgrading ZK to 3.3.6 (you may install a separate, dedicated ZK instance for Storm if you absolutely need ZK 3.4.x for another system). You could also clone the Storm code and merge that pull request locally, or compile the latest version of the trunk, but that's a bit adventurous and more tiresome than just waiting for those nice folks to deliver a new release for us :)
A workaround for this situation is to clear Storm's data directory (configured in storm.yaml under storm.local.dir), then restart the supervisor. I did that in my test environment by clearing Storm's data directory and restarting both the nimbus and the supervisor.
I think it's caused by a previous crash of the Storm cluster, from which the supervisor cannot recover on its own.
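For reference, the relevant setting looks like this in storm.yaml (a sketch; the path is just an example):

    # directory where Storm keeps its local working state
    storm.local.dir: "/var/storm"

Stop the daemons, delete the contents of that directory, then start nimbus and the supervisors again.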