Kafka-Manager Web UI not loading - apache-kafka

I have started kafka-manager on a CentOS VM, and below are its logs:
[info] o.a.z.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
[info] o.a.z.ZooKeeper - Client environment:java.io.tmpdir=/tmp
[info] o.a.z.ZooKeeper - Client environment:java.compiler=
[info] o.a.z.ZooKeeper - Client environment:os.name=Linux
[info] o.a.z.ZooKeeper - Client environment:os.arch=amd64
[info] o.a.z.ZooKeeper - Client environment:os.version=3.10.0-862.el7.x86_64
[info] o.a.z.ZooKeeper - Client environment:user.name=root
[info] o.a.z.ZooKeeper - Client environment:user.home=/root
[info] o.a.z.ZooKeeper - Client environment:user.dir=/root/Confluent_kafka/kafka-manager-1.3.3.21
[info] o.a.z.ZooKeeper - Initiating client connection, connectString=localhost:3181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState#73687e45
[info] o.a.z.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:3181. Will not attempt to authenticate using SASL (unknown error)
[info] o.a.z.ClientCnxn - Socket connection established to localhost/127.0.0.1:3181, initiating session
[info] k.m.a.KafkaManagerActor - zk=localhost:3181
[info] k.m.a.KafkaManagerActor - baseZkPath=/kafka-manager
[info] o.a.z.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:3181, sessionid = 0x16565ff95660000, negotiated timeout = 60000
[info] k.m.a.KafkaManagerActor - Started actor akka://kafka-manager-system/user/kafka-manager
[info] k.m.a.KafkaManagerActor - Starting delete clusters path cache...
[info] k.m.a.DeleteClusterActor - Started actor akka://kafka-manager-system/user/kafka-manager/delete-cluster
[info] k.m.a.DeleteClusterActor - Starting delete clusters path cache...
[info] k.m.a.KafkaManagerActor - Starting kafka manager path cache...
[info] k.m.a.DeleteClusterActor - Adding kafka manager path cache listener...
[info] k.m.a.DeleteClusterActor - Scheduling updater for 10 seconds
[info] k.m.a.KafkaManagerActor - Adding kafka manager path cache listener...
[info] play.api.Play - Application started (Prod)
[info] p.c.s.NettyServer - Listening for HTTP on /0.0.0.0:9000
[info] k.m.a.KafkaManagerActor - Updating internal state...
[info] k.m.a.KafkaManagerActor - Updating internal state...
[info] k.m.a.KafkaManagerActor - Updating internal state...
[info] k.m.a.KafkaManagerActor - Shutting down kafka manager
The kafka-manager process starts perfectly, but the web UI does not load at all.
IPv6 is disabled, and netstat shows this:
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN 2670/java
Can someone help with this?

Related

Play: Restart on code change takes a long time shutting down and starting the Hikari pool

Waiting for the server to restart when working with Play costs us a lot of time.
One thing I see in the log is that shutting down and starting the HikariPool takes a lot of time (> 40 seconds).
Here is the log:
2019-10-31 09:11:47,327 [info] application - Shutting down connection pool.
2019-10-31 09:11:47,328 [info] c.z.h.HikariDataSource - HikariPool-58 - Shutdown initiated...
2019-10-31 09:11:53,629 [info] c.z.h.HikariDataSource - HikariPool-58 - Shutdown completed.
2019-10-31 09:11:53,629 [info] application - Shutting down connection pool.
2019-10-31 09:11:53,629 [info] c.z.h.HikariDataSource - HikariPool-59 - Shutdown initiated...
2019-10-31 09:11:53,636 [info] c.z.h.HikariDataSource - HikariPool-59 - Shutdown completed.
2019-10-31 09:11:53,636 [info] application - Shutting down connection pool.
2019-10-31 09:11:53,636 [info] c.z.h.HikariDataSource - HikariPool-60 - Shutdown initiated...
2019-10-31 09:11:53,640 [info] c.z.h.HikariDataSource - HikariPool-60 - Shutdown completed.
....
2019-10-31 09:12:26,454 [info] p.a.d.DefaultDBApi - Database [amseewen] initialized at jdbc:postgresql://localhost:5432/bpf?currentSchema=amseewen
2019-10-31 09:12:26,454 [info] application - Creating Pool for datasource 'amseewen'
2019-10-31 09:12:26,454 [info] c.z.h.HikariDataSource - HikariPool-68 - Starting...
2019-10-31 09:12:26,455 [info] c.z.h.HikariDataSource - HikariPool-68 - Start completed.
2019-10-31 09:12:26,455 [info] p.a.d.DefaultDBApi - Database [companyOds] initialized at jdbc:sqlserver://localhost:1433;databaseName=companyOds
2019-10-31 09:12:26,455 [info] application - Creating Pool for datasource 'companyOds'
2019-10-31 09:12:26,455 [info] c.z.h.HikariDataSource - HikariPool-69 - Starting...
2019-10-31 09:12:26,456 [info] c.z.h.HikariDataSource - HikariPool-69 - Start completed.
2019-10-31 09:12:26,457 [info] p.a.d.DefaultDBApi - Database [company] initialized at jdbc:oracle:thin:#castor.olymp:1521:citrin
2019-10-31 09:12:26,457 [info] application - Creating Pool for datasource 'company'
2019-10-31 09:12:26,457 [info] c.z.h.HikariDataSource - HikariPool-70 - Starting...
2019-10-31 09:12:26,458 [info] c.z.h.HikariDataSource - HikariPool-70 - Start completed.
2019-10-31 09:12:26,458 [info] p.a.d.DefaultDBApi - Database [amseewen] initialized at jdbc:postgresql://localhost:5432/bpf?currentSchema=amseewen
2019-10-31 09:12:26,458 [info] application - Creating Pool for datasource 'amseewen'
2019-10-31 09:12:26,458 [info] c.z.h.HikariDataSource - HikariPool-71 - Starting...
2019-10-31 09:12:26,459 [info] c.z.h.HikariDataSource - HikariPool-71 - Start completed.
2019-10-31 09:12:26,459 [info] p.a.d.DefaultDBApi - Database [companyOds] initialized at jdbc:sqlserver://localhost:1433;databaseName=companyOds
2019-10-31 09:12:26,459 [info] application - Creating Pool for datasource 'companyOds'
2019-10-31 09:12:26,459 [info] c.z.h.HikariDataSource - HikariPool-72 - Starting...
2019-10-31 09:12:26,459 [info] c.z.h.HikariDataSource - HikariPool-72 - Start completed.
Is there a way to shorten this time?
Updates
I use the Play integration of IntelliJ. The build tool is sbt.
Here is the configuration:
sbt 1.2.8
Thread Pools
We use the default thread pool for the application. For the Database access we use:
database.dispatcher {
  executor = "thread-pool-executor"
  throughput = 1
  thread-pool-executor {
    fixed-pool-size = 55 # db conn pool (50) + number of cores (4) + housekeeping (1)
  }
}
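For context, here is a minimal sketch (not from the original post; it assumes an injected Akka ActorSystem named system and a hypothetical repository class) of how such a dispatcher is typically looked up and used as the execution context for blocking JDBC work, keeping it off Play's default pool:
import akka.actor.ActorSystem
import scala.concurrent.{ExecutionContext, Future}

class UserRepository(system: ActorSystem) {
  // Look up the dispatcher configured above and use it for blocking DB calls.
  private implicit val dbExecutionContext: ExecutionContext =
    system.dispatchers.lookup("database.dispatcher")

  def findUserName(id: Long): Future[Option[String]] = Future {
    // blocking JDBC access would go here
    None
  }
}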
OK, with the help of billoneil on the HikariCP GitHub page and the suggestions of @Issilva, I could figure out the problem:
The problem is datasources whose database is not reachable (during development). We configured the application so that it also
starts when a database is not reachable (initializationFailTimeout = -1).
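As an illustration of that setting, here is a sketch using the HikariCP API directly (not the exact code from the post; in a Play app the equivalent typically lives in the datasource's hikaricp configuration):
import com.zaxxer.hikari.{HikariConfig, HikariDataSource}

val hikariConfig = new HikariConfig()
hikariConfig.setJdbcUrl("jdbc:postgresql://localhost:5432/bpf?currentSchema=amseewen")
// -1 means: create the pool immediately and do not fail (or block) at startup,
// even if the database cannot be reached.
hikariConfig.setInitializationFailTimeout(-1)
val dataSource = new HikariDataSource(hikariConfig)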
So there are two problems when shutting down:
The pools are shut down sequentially.
A pool that has no connection takes 10 seconds to shut down.
The suggested solution is not to initialise the datasources that cannot be reached. Apart from a strange exception, the shutdown-time problem is solved (down to milliseconds).
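One possible way to implement that suggestion is sketched below (a hypothetical helper, not from the original answer): probe each configured host/port with a short TCP connect and only create pools for datasources that actually respond.
import java.net.{InetSocketAddress, Socket}
import scala.util.Try

// Returns true if a TCP connection to host:port succeeds within timeoutMs.
def isReachable(host: String, port: Int, timeoutMs: Int = 500): Boolean =
  Try {
    val socket = new Socket()
    try socket.connect(new InetSocketAddress(host, port), timeoutMs)
    finally socket.close()
    true
  }.getOrElse(false)

// Only initialise pools for datasources whose database answers (ports here are illustrative).
val configuredDataSources = Seq("postgres" -> 5432, "sqlserver" -> 1433, "oracle" -> 1521)
val reachable = configuredDataSources.filter { case (_, port) => isReachable("localhost", port) }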

How to fix the NullPointerException that happens in KafkaSpout running on Heron?

When I run a Storm topology with KafkaSpout on Heron, the following exception occurs:
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.instance.HeronInstance:
Starting instance container_2_ads_2 for topology AdvertisingTopology and topologyId AdvertisingTopologyf7b4acbe-bdbc-4772-aaa4-9dd2f113f405 for component ads with taskId 2 and componentIndex 0 and stmgrId stmgr-2 and stmgrPort 31162 and metricsManagerPort 31067
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.instance.HeronInstance: System Config: {heron.streammgr.network.backpressure.lowwatermark.mb=50, heron.streammgr.connection.write.batch.size.mb=1, heron.streammgr.stateful.buffer.size.mb=100, heron.instance.internal.bolt.write.queue.capacity=128, heron.instance.tuning.expected.spout.read.queue.size=512, heron.metricsmgr.network.write.batch.size.bytes=ByteAmount{32768 bytes}, heron.instance.reconnect.streammgr.interval.sec=PT5S, heron.instance.tuning.interval.ms=PT0.1S, heron.instance.emit.batch.size.bytes=ByteAmount{32768 bytes}, heron.logging.directory=log-files, heron.check.tmaster.location.interval.sec=120, heron.instance.reconnect.metricsmgr.interval.sec=PT5S, heron.streammgr.client.reconnect.tmaster.max.attempts=30, heron.streammgr.network.backpressure.highwatermark.mb=100, heron.instance.network.read.batch.size.bytes=ByteAmount{32768 bytes}, heron.instance.tuning.expected.metrics.write.queue.size=8, heron.instance.internal.spout.write.queue.capacity=128, heron.instance.force.exit.timeout.ms=PT2S, heron.tmaster.network.stats.options.maximum.packet.mb=1, heron.streammgr.xormgr.rotatingmap.nbuckets=3, heron.instance.set.control.tuple.capacity=1024, heron.metricsmgr.network.read.batch.size.bytes=ByteAmount{32768 bytes}, heron.streammgr.client.reconnect.tmaster.interval.sec=10, heron.instance.execute.batch.time.ms=PT0.016S, heron.metrics.export.interval.sec=PT1M, heron.streammgr.connection.read.batch.size.mb=1, heron.streammgr.cache.drain.size.mb=100, heron.tmaster.network.master.options.maximum.packet.mb=16, heron.tmaster.establish.retry.interval.sec=1, heron.metrics.max.exceptions.per.message.count=1024, heron.tmaster.stmgr.state.timeout.sec=60, heron.instance.network.write.batch.size.bytes=ByteAmount{32768 bytes}, heron.logging.err.threshold=3, heron.tmaster.network.controller.options.maximum.packet.mb=1, heron.tmaster.metrics.collector.maximum.exception=256, heron.instance.network.write.batch.time.ms=PT0.016S, heron.instance.network.options.socket.send.buffer.size.bytes=ByteAmount{6 MB (6553600 bytes)}, heron.streammgr.mempool.max.message.number=512, heron.logging.maximum.size.mb=100, heron.streammgr.tmaster.heartbeat.interval.sec=10, heron.instance.network.read.batch.time.ms=PT0.016S, heron.tmaster.metrics.network.bindallinterfaces=false, heron.streammgr.network.options.maximum.packet.mb=10, heron.instance.tuning.expected.bolt.write.queue.size=8, heron.metricsmgr.network.options.socket.received.buffer.size.bytes=ByteAmount{8 MB (8738000 bytes)}, heron.logging.maximum.files=5, heron.instance.network.options.socket.received.buffer.size.bytes=ByteAmount{8 MB (8738000 bytes)}, heron.instance.execute.batch.size.bytes=ByteAmount{32768 bytes}, heron.instance.acknowledgement.nbuckets=10, heron.metricsmgr.network.read.batch.time.ms=PT0.016S, heron.metricsmgr.network.options.socket.send.buffer.size.bytes=ByteAmount{6 MB (6553600 bytes)}, heron.metricsmgr.network.options.maximum.packetsize.bytes=ByteAmount{1 MB (1048576 bytes)}, heron.instance.tuning.expected.bolt.read.queue.size=8, heron.logging.flush.interval.sec=10, heron.streammgr.cache.drain.frequency.ms=10, heron.tmaster.establish.retry.times=30, heron.instance.network.options.maximum.packetsize.bytes=ByteAmount{10 MB (10485760 bytes)}, heron.instance.tuning.current.sample.weight=0.8, heron.instance.reconnect.streammgr.times=60, heron.logging.prune.interval.sec=300, heron.instance.reconnect.metricsmgr.times=60, heron.tmaster.metrics.collector.maximum.interval.min=PT3H, 
heron.tmaster.metrics.collector.purge.interval.sec=PT1M, heron.streammgr.client.reconnect.interval.sec=1, heron.instance.internal.spout.read.queue.capacity=1024, heron.instance.ack.batch.time.ms=PT0.128S, heron.instance.set.data.tuple.size.bytes=ByteAmount{8 MB (8388608 bytes)}, heron.instance.tuning.expected.spout.write.queue.size=8, heron.instance.internal.bolt.read.queue.capacity=128, heron.instance.set.data.tuple.capacity=1024, heron.instance.metrics.system.sample.interval.sec=PT10S, heron.streammgr.network.backpressure.threshold=3, heron.instance.emit.batch.time.ms=PT0.016S, heron.metricsmgr.network.write.batch.time.ms=PT0.016S, heron.instance.internal.metrics.write.queue.capacity=128}
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.common.network.HeronClient: Connecting to endpoint: /127.0.0.1:31162
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.common.network.HeronClient: Connecting to endpoint: /127.0.0.1:31067
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.network.StreamManagerClient: Connected to Stream Manager. Ready to send register request
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.network.MetricsManagerClient: Connected to Metrics Manager. Ready to send register request
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.network.StreamManagerClient: Stop writing due to not yet connected to Stream Manager.
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.network.StreamManagerClient: Stop writing due to not yet connected to Stream Manager.
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.network.StreamManagerClient: We registered ourselves to the Stream Manager
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.network.StreamManagerClient: Handling assignment message from response
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.network.StreamManagerClient: We received a new Physical Plan.
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.network.StreamManagerClient: Push to Slave
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.network.MetricsManagerClient: We registered ourselves to the Metrics Manager
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.common.utils.misc.PhysicalPlanHelper: Building configs for component: ads
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.common.utils.misc.PhysicalPlanHelper: Added topology-level configs: {topology.acker.executors=2, topology.workers=3, topology.skip.missing.kryo.registrations=false, topology.enable.message.timeouts=true, topology.serializer.classname=org.apache.storm.serialization.HeronPluggableSerializerDelegate, topology.debug=false, topology.max.spout.pending=100, topology.kryo.factory=org.apache.storm.serialization.DefaultKryoFactory, topology.fall.back.on.java.serialization=false, topology.name=AdvertisingTopology, topology.component.parallelism=1, topology.stmgrs=3, topology.reliability.mode=ATLEAST_ONCE, topology.message.timeout.secs=30}
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.common.utils.misc.PhysicalPlanHelper: Added component-specific configs: {topology.acker.executors=2, config.zkRoot=/ad-events/6647e83d-6bd8-454e-ad91-d3ec0a012e62, topology.workers=3, topology.skip.missing.kryo.registrations=false, topology.enable.message.timeouts=true, topology.serializer.classname=org.apache.storm.serialization.HeronPluggableSerializerDelegate, topology.debug=false, topology.max.spout.pending=100, topology.kryo.factory=org.apache.storm.serialization.DefaultKryoFactory, topology.fall.back.on.java.serialization=false, topology.name=AdvertisingTopology, topology.component.parallelism=1, config.topics=ad-events, topology.stmgrs=3, topology.reliability.mode=ATLEAST_ONCE, topology.message.timeout.secs=30, config.zkNodeBrokers=/brokers}
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.instance.Slave: Incarnating ourselves as ads with task id 2
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.instance.spout.SpoutInstance: Is this topology stateful: false
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.instance.spout.SpoutInstance: Enable Ack: true
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.instance.spout.SpoutInstance: EnableMessageTimeouts: true
[2018-11-01 22:43:49 +0800] [SEVERE] com.twitter.heron.instance.HeronInstance: Exception caught in thread: SlaveThread with id: 12
java.lang.NullPointerException
at org.apache.storm.kafka.KafkaSpout.open(KafkaSpout.java:80)
at org.apache.storm.topology.IRichSpoutDelegate.open(IRichSpoutDelegate.java:53)
at com.twitter.heron.instance.spout.SpoutInstance.init(SpoutInstance.java:173)
at com.twitter.heron.instance.Slave.startInstanceIfNeeded(Slave.java:222)
at com.twitter.heron.instance.Slave.handleNewAssignment(Slave.java:173)
at com.twitter.heron.instance.Slave.handleNewPhysicalPlan(Slave.java:349)
at com.twitter.heron.instance.Slave.access$300(Slave.java:49)
at com.twitter.heron.instance.Slave$1.run(Slave.java:118)
at com.twitter.heron.common.basics.WakeableLooper.executeTasksOnWakeup(WakeableLooper.java:160)
at com.twitter.heron.common.basics.WakeableLooper.runOnce(WakeableLooper.java:89)
at com.twitter.heron.common.basics.WakeableLooper.loop(WakeableLooper.java:79)
at com.twitter.heron.instance.Slave.run(Slave.java:180)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.instance.HeronInstance: Waiting for process exit in PT2S
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.instance.Slave: Closing the Slave Thread
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.common.utils.metrics.MetricsCollector: Forcing to gather all metrics and flush out.
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.instance.Slave: Shutting down the instance
[2018-11-01 22:43:49 +0800] [WARNING] com.twitter.heron.common.basics.SysUtils: Failed to close com.twitter.heron.instance.Slave#4bef4d93
java.lang.NullPointerException
at org.apache.storm.kafka.KafkaSpout.close(KafkaSpout.java:136)
at org.apache.storm.topology.IRichSpoutDelegate.close(IRichSpoutDelegate.java:58)
at com.twitter.heron.instance.spout.SpoutInstance.clean(SpoutInstance.java:195)
at com.twitter.heron.instance.spout.SpoutInstance.shutdown(SpoutInstance.java:204)
at com.twitter.heron.instance.Slave.close(Slave.java:238)
at com.twitter.heron.common.basics.SysUtils.closeIgnoringExceptions(SysUtils.java:66)
at com.twitter.heron.instance.HeronInstance$SlaveExitTask.run(HeronInstance.java:428)
at com.twitter.heron.instance.HeronInstance$DefaultExceptionHandler.handleException(HeronInstance.java:396)
at com.twitter.heron.instance.HeronInstance$DefaultExceptionHandler.uncaughtException(HeronInstance.java:360)
at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1057)
at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1052)
at java.lang.Thread.dispatchUncaughtException(Thread.java:1959)
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.instance.Gateway: Closing the Gateway thread
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.common.utils.metrics.MetricsCollector: Forcing to gather all metrics and flush out.
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.network.MetricsManagerClient: Flushing all pending data in MetricsManagerClient
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.network.StreamManagerClient: Flushing all pending data in StreamManagerClient
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.common.network.SocketChannelHelper: Forcing to flush data to socket with best effort.
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.common.network.HeronClient: To stop the HeronClient.
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.network.MetricsManagerClient: MetricsManagerClient exits
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.common.network.SocketChannelHelper: Forcing to flush data to socket with best effort.
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.common.network.HeronClient: To stop the HeronClient.
[2018-11-01 22:43:49 +0800] [INFO] com.twitter.heron.network.StreamManagerClient: StreamManagerClient exits.
[2018-11-01 22:43:49 +0800] [SEVERE] com.twitter.heron.instance.HeronInstance: Instance Process exiting.
And the code of the topology is as follows:
String zkServerHosts = "MY_ZK_IP:2181";
ZkHosts hosts = new ZkHosts(zkServerHosts);
SpoutConfig spoutConfig = new SpoutConfig(hosts, kafkaTopic, "/" + kafkaTopic, UUID.randomUUID().toString());
spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
KafkaSpout kafkaSpout = new KafkaSpout(spoutConfig);
And the location of the NPE is line 80, in the open method of the KafkaSpout class:
public Object getValueAndReset() {
    List<PartitionManager> pms = KafkaSpout.this.coordinator.getMyManagedPartitions();
    Set<Partition> latestPartitions = new HashSet();
    Iterator var3 = pms.iterator();
    PartitionManager pm;
    while(var3.hasNext()) { // the line where the NPE happened
        pm = (PartitionManager)var3.next();
        latestPartitions.add(pm.getPartition());
    }
    this.kafkaOffsetMetric.refreshPartitions(latestPartitions);
    var3 = pms.iterator();
    while(var3.hasNext()) {
        pm = (PartitionManager)var3.next();
        this.kafkaOffsetMetric.setOffsetData(pm.getPartition(), pm.getOffsetData());
    }
    return this.kafkaOffsetMetric.getValueAndReset();
}
I don't know what caused this problem or how to fix it. Any help is appreciated.
NEW EDIT:
All imports have been pointed to the heron-storm classes, but the NPE still happens.
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
The Storm-based Kafka spout does not work with the native Heron topology API. You will need to use the heron-storm API in compatibility mode (add this dependency to your pom file) to build your topology and interface with the storm-kafka spout. It should just be a case of swapping the heron imports for heron-storm imports in your bolts.
Some examples of using the heron-storm API are shown here.
Storm and Heron activate their bolts/spouts in different ways, which can cause issues with Storm-only code in native Heron topologies.
This is resolved:
final KafkaSpout<byte[], byte[]> spout =
    new KafkaSpout<byte[], byte[]>(kafkaSpoutConfig) {
        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            super.open(conf, context, collector);
            super.activate();
        }
    };

Storm topology deployment timeout

I'm trying to set up Apache Storm (1.0.2) on my MacBook Pro but apparently run into timeout issues when I try to deploy a topology. The UI also hangs, spitting out the same exception.
3491 [main] INFO o.a.s.StormSubmitter - Generated ZooKeeper secret payload for MD5-digest: -8915636774701640550:-6510752657961785886
3580 [main] INFO o.a.s.s.a.AuthUtils - Got AutoCreds []
Exception in thread "main" java.lang.RuntimeException: org.apache.storm.thrift.transport.TTransportException: java.net.ConnectException: Operation timed out (Connection timed out)
at org.apache.storm.security.auth.TBackoffConnect.retryNext(TBackoffConnect.java:64)
at org.apache.storm.security.auth.TBackoffConnect.doConnectWithRetry(TBackoffConnect.java:56)
at org.apache.storm.security.auth.ThriftClient.reconnect(ThriftClient.java:99)
at org.apache.storm.security.auth.ThriftClient.<init>(ThriftClient.java:69)
at org.apache.storm.utils.NimbusClient.<init>(NimbusClient.java:106)
at org.apache.storm.utils.NimbusClient.getConfiguredClientAs(NimbusClient.java:78)
at org.apache.storm.StormSubmitter.topologyNameExists(StormSubmitter.java:371)
at org.apache.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:233)
at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:311)
at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:157)
Caused by: org.apache.storm.thrift.transport.TTransportException: java.net.ConnectException: Operation timed out (Connection timed out)
at org.apache.storm.thrift.transport.TSocket.open(TSocket.java:226)
at org.apache.storm.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)
at org.apache.storm.security.auth.SimpleTransportPlugin.connect(SimpleTransportPlugin.java:103)
at org.apache.storm.security.auth.TBackoffConnect.doConnectWithRetry(TBackoffConnect.java:53)
... 9 more
Caused by: java.net.ConnectException: Operation timed out (Connection timed out)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.storm.thrift.transport.TSocket.open(TSocket.java:221)
... 12 more
I'm using the default storm.yaml configuration from the GitHub repository, without any changes, and the default zoo.cfg file for ZooKeeper as well:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=5
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=2
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
clientPortAddress=localhost
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
I came across similar issues, which prompted me to check my hosts file, which I've posted below:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
255.255.255.255 broadcasthost
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 ip6-localhost ip6-localhost.localdomain localhost6 localhost6.localdomain6
When I start the ZooKeeper server, I believe it gets started as usual:
2017-11-27 16:05:14,314 [myid:] - INFO [main:QuorumPeerConfig#103] - Reading configuration from: /Users/aniket.alhat/Tools/zookeeper/bin/../conf/zoo.cfg
2017-11-27 16:05:14,318 [myid:] - INFO [main:DatadirCleanupManager#78] - autopurge.snapRetainCount set to 3
2017-11-27 16:05:14,318 [myid:] - INFO [main:DatadirCleanupManager#79] - autopurge.purgeInterval set to 0
2017-11-27 16:05:14,318 [myid:] - INFO [main:DatadirCleanupManager#101] - Purge task is not scheduled.
2017-11-27 16:05:14,318 [myid:] - WARN [main:QuorumPeerMain#113] - Either no config or no quorum defined in config, running in standalone mode
2017-11-27 16:05:14,329 [myid:] - INFO [main:QuorumPeerConfig#103] - Reading configuration from: /Users/aniket.alhat/Tools/zookeeper/bin/../conf/zoo.cfg
2017-11-27 16:05:14,330 [myid:] - INFO [main:ZooKeeperServerMain#95] - Starting server
2017-11-27 16:05:14,335 [myid:] - INFO [main:Environment#100] - Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2017-11-27 16:05:14,336 [myid:] - INFO [main:Environment#100] - Server environment:host.name=10.9.157.77
2017-11-27 16:05:14,336 [myid:] - INFO [main:Environment#100] - Server environment:java.version=1.8.0_131
2017-11-27 16:05:14,336 [myid:] - INFO [main:Environment#100] - Server environment:java.vendor=Oracle Corporation
2017-11-27 16:05:14,336 [myid:] - INFO [main:Environment#100] - Server environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre
2017-11-27 16:05:14,336 [myid:] - INFO [main:Environment#100] - Server environment:java.class.path=/Users/aniket.alhat/Tools/zookeeper/bin/../build/classes:/Users/aniket.alhat/Tools/zookeeper/bin/../build/lib/*.jar:/Users/aniket.alhat/Tools/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/Users/aniket.alhat/Tools/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/Users/aniket.alhat/Tools/zookeeper/bin/../lib/netty-3.7.0.Final.jar:/Users/aniket.alhat/Tools/zookeeper/bin/../lib/log4j-1.2.16.jar:/Users/aniket.alhat/Tools/zookeeper/bin/../lib/jline-0.9.94.jar:/Users/aniket.alhat/Tools/zookeeper/bin/../zookeeper-3.4.6.jar:/Users/aniket.alhat/Tools/zookeeper/bin/../src/java/lib/*.jar:/Users/aniket.alhat/Tools/zookeeper/bin/../conf:
2017-11-27 16:05:14,336 [myid:] - INFO [main:Environment#100] - Server environment:java.library.path=/Users/aniket.alhat/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.
2017-11-27 16:05:14,336 [myid:] - INFO [main:Environment#100] - Server environment:java.io.tmpdir=/var/folders/9c/g5cj60_j1x344r3zpd_hr99j5jwnk4/T/
2017-11-27 16:05:14,336 [myid:] - INFO [main:Environment#100] - Server environment:java.compiler=<NA>
2017-11-27 16:05:14,337 [myid:] - INFO [main:Environment#100] - Server environment:os.name=Mac OS X
2017-11-27 16:05:14,337 [myid:] - INFO [main:Environment#100] - Server environment:os.arch=x86_64
2017-11-27 16:05:14,338 [myid:] - INFO [main:Environment#100] - Server environment:os.version=10.12.6
2017-11-27 16:05:14,338 [myid:] - INFO [main:Environment#100] - Server environment:user.name=aniket.alhat
2017-11-27 16:05:14,338 [myid:] - INFO [main:Environment#100] - Server environment:user.home=/Users/aniket.alhat
2017-11-27 16:05:14,338 [myid:] - INFO [main:Environment#100] - Server environment:user.dir=/Users/aniket.alhat/Tools/zookeeper-3.4.6
2017-11-27 16:05:14,344 [myid:] - INFO [main:ZooKeeperServer#755] - tickTime set to 2000
2017-11-27 16:05:14,344 [myid:] - INFO [main:ZooKeeperServer#764] - minSessionTimeout set to -1
2017-11-27 16:05:14,344 [myid:] - INFO [main:ZooKeeperServer#773] - maxSessionTimeout set to -1
2017-11-27 16:05:14,361 [myid:] - INFO [main:NIOServerCnxnFactory#94] - binding to port localhost/127.0.0.1:2181
And I also don't see any errors in the Nimbus log:
2017-11-27 16:05:35.365 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl [INFO] Starting
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:host.name=10.49.48.134
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.version=1.8.0_131
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.vendor=Oracle Corporation
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.class.path=/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/asm-5.0.3.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/clojure-1.7.0.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/disruptor-3.3.2.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/kryo-3.0.3.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/log4j-api-2.1.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/log4j-core-2.1.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/log4j-over-slf4j-1.6.6.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/log4j-slf4j-impl-2.1.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/minlog-1.3.0.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/objenesis-2.1.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/reflectasm-1.10.1.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/servlet-api-2.5.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/slf4j-api-1.7.7.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/storm-core-1.0.2.jar:/Users/aniket.alhat/Tools/apache-storm-1.0.2/lib/storm-rename-hack-1.0.2.jar:/Users/aniket.alhat/Tools/storm/conf
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.library.path=/usr/local/lib:/opt/local/lib:/usr/lib
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.io.tmpdir=/var/folders/9c/g5cj60_j1x344r3zpd_hr99j5jwnk4/T/
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:java.compiler=<NA>
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:os.name=Mac OS X
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:os.arch=x86_64
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:os.version=10.12.6
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:user.name=aniket.alhat
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:user.home=/Users/aniket.alhat
2017-11-27 16:05:35.373 o.a.s.s.o.a.z.ZooKeeper [INFO] Client environment:user.dir=/Users/aniket.alhat/Tools/apache-storm-1.0.2
2017-11-27 16:05:35.374 o.a.s.s.o.a.z.ZooKeeper [INFO] Initiating client connection, connectString=localhost:2181/storm sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState#eac3a26
2017-11-27 16:05:35.397 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-11-27 16:05:35.400 o.a.s.b.FileBlobStoreImpl [INFO] Creating new blob store based in storm-local/blobs
2017-11-27 16:05:35.406 o.a.s.d.nimbus [INFO] Using default scheduler
2017-11-27 16:05:35.408 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl [INFO] Starting
2017-11-27 16:05:35.409 o.a.s.s.o.a.z.ZooKeeper [INFO] Initiating client connection, connectString=localhost:2181 sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState#68868328
2017-11-27 16:05:35.411 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-11-27 16:05:35.438 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl [INFO] Starting
2017-11-27 16:05:35.438 o.a.s.s.o.a.z.ZooKeeper [INFO] Initiating client connection, connectString=localhost:2181 sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState#512d6e60
2017-11-27 16:05:35.440 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-11-27 16:05:35.478 o.a.s.s.o.a.z.ClientCnxn [INFO] Socket connection established to localhost/127.0.0.1:2181, initiating session
2017-11-27 16:05:35.478 o.a.s.s.o.a.z.ClientCnxn [INFO] Socket connection established to localhost/127.0.0.1:2181, initiating session
2017-11-27 16:05:35.479 o.a.s.s.o.a.z.ClientCnxn [INFO] Socket connection established to localhost/127.0.0.1:2181, initiating session
2017-11-27 16:05:35.513 o.a.s.s.o.a.z.ClientCnxn [INFO] Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15ffc4b4d950000, negotiated timeout = 20000
2017-11-27 16:05:35.513 o.a.s.s.o.a.z.ClientCnxn [INFO] Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15ffc4b4d950002, negotiated timeout = 20000
2017-11-27 16:05:35.513 o.a.s.s.o.a.z.ClientCnxn [INFO] Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15ffc4b4d950001, negotiated timeout = 20000
2017-11-27 16:05:35.517 o.a.s.s.o.a.c.f.s.ConnectionStateManager [INFO] State change: CONNECTED
2017-11-27 16:05:35.517 o.a.s.s.o.a.c.f.s.ConnectionStateManager [INFO] State change: CONNECTED
2017-11-27 16:05:35.517 o.a.s.s.o.a.c.f.s.ConnectionStateManager [INFO] State change: CONNECTED
2017-11-27 16:05:35.518 o.a.s.zookeeper [INFO] Zookeeper state update: :connected:none
2017-11-27 16:05:35.518 o.a.s.zookeeper [INFO] Zookeeper state update: :connected:none
2017-11-27 16:05:35.531 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl [INFO] backgroundOperationsLoop exiting
2017-11-27 16:05:35.534 o.a.s.s.o.a.z.ZooKeeper [INFO] Session: 0x15ffc4b4d950002 closed
2017-11-27 16:05:35.534 o.a.s.s.o.a.z.ClientCnxn [INFO] EventThread shut down
2017-11-27 16:05:35.536 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl [INFO] Starting
2017-11-27 16:05:35.536 o.a.s.s.o.a.z.ZooKeeper [INFO] Initiating client connection, connectString=localhost:2181/storm sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState#3722c145
I would really appreciate some help fixing the timeout issue.
Check whether Nimbus started correctly. I faced a similar issue when an instance of Nimbus was not terminated correctly.
Try killing the process and restarting Nimbus.
After a lot of trial and error I discovered that my Nimbus process gets started with the IP address 10.9.157.77 while ifconfig gives me 10.49.52.97. I'm not sure why or how this is happening; I'd really appreciate it if someone could help me figure it out.
nimbus.log
2017-11-30 16:47:00.342 o.a.s.zookeeper [INFO] 10.9.157.77 gained leadership, checking if it has all the topology code locally.
2017-11-30 16:47:00.350 o.a.s.zookeeper [INFO] active-topology-ids [] local-topology-ids [] diff-topology []
2017-11-30 16:47:00.350 o.a.s.zookeeper [INFO] Accepting leadership, all active topology found localy.
2017-11-30 16:47:00.352 o.a.s.d.m.MetricsUtils [INFO] Using statistics reporter plugin:org.apache.storm.daemon.metrics.reporters.JmxPreparableReporter
ifconfig
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether xx:xx:xx:xx:xx:xx
inet6 fe80::427:8998:bb4d:b2bd%en0 prefixlen 64 secured scopeid 0x4
inet 10.49.52.97 netmask 0xfffffc00 broadcast 10.49.55.255
Continuing forward, I found that every time I started the Nimbus process the IP address 10.9.157.77 was magically acquired and stored in ZooKeeper as well.
[zk: localhost:2181(CONNECTED) 15] ls /storm/nimbuses
[10.9.157.77:6627]
I cleaned the /storm directory with rmr and restarted Nimbus, creating the directory once again, but there was no change.
I also tried flushing the DNS cache; the command used was sudo killall -HUP mDNSResponder.
I also observed that the magical IP wasn't the same after restarts; it changed to 10.49.48.134:
2017-11-30 17:28:59.630 o.a.s.zookeeper [INFO] 10.49.48.134 gained leadership, checking if it has all the topology code locally.
2017-11-30 17:28:59.646 o.a.s.zookeeper [INFO] active-topology-ids [] local-topology-ids [] diff-topology []
2017-11-30 17:28:59.646 o.a.s.zookeeper [INFO] Accepting leadership, all active topology found localy.
Later I disconnected from Wi-Fi and started everything once again, and I was able to start the Storm UI, run the command storm list, and deploy the topology locally.
You can add storm.local.hostname= to your storm/conf/storm.yaml and restart. Also, work with IPv4/FQDN and not IPv6. This worked for me (same Storm 1.0.2).
If there are still problems, you can also add nimbus.seeds= with the Nimbus host.
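For illustration, those entries might look roughly like this in storm/conf/storm.yaml (the addresses below are placeholders, not values from the original setup):
storm.local.hostname: "127.0.0.1"
nimbus.seeds: ["127.0.0.1"]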

Deploy on Heroku using Scalatra

I am trying to deploy my Scalatra web application on Heroku, but I am having one problem.
My application works locally with sbt and using "heroku local web". I am using the Heroku sbt plugin.
When I use "sbt stage deployHeroku" the application is uploaded and started properly, obtaining:
user#user-X550JF:~/Documents/SOFT/cloudrobe$ sbt stage deployHeroku
Detected sbt version 0.13.9
....
....
[info] Packaging /home/user/Documents/SOFT/cloudrobe/target/scala-2.11/cloudrobe_2.11-0.1.0-SNAPSHOT.war ...
[info] Done packaging.
[success] Total time: 2 s, completed May 25, 2016 1:04:51 AM
[info] -----> Packaging application...
[info] - app: cloudrobe
[info] - including: target/universal/stage/
[info] -----> Creating build...
[info] - file: target/heroku/slug.tgz
[info] - size: 45MB
[info] -----> Uploading slug... (100%)
[info] - success
[info] -----> Deploying...
[info] remote:
[info] remote: -----> Fetching set buildpack https://codon-buildpacks.s3.amazonaws.com/buildpacks/heroku/jvm-common.tgz... done
[info] remote: -----> sbt-heroku app detected
[info] remote: -----> Installing OpenJDK 1.8... done
[info] remote:
[info] remote: -----> Discovering process types
[info] remote: Procfile declares types -> web
[info] remote:
[info] remote: -----> Compressing...
[info] remote: Done: 93.5M
[info] remote: -----> Launching...
[info] remote: Released v11
[info] remote: https://cloudrobe.herokuapp.com/ deployed to Heroku
[info] remote:
[info] -----> Done
___________________________________________________________________________
Using "heroku logs" I can see:
2016-05-24T23:14:16.007200+00:00 app[web.1]: 23:14:16.006 [main] INFO org.mongodb.driver.connection - Opened connection [connectionId{localValue:5, serverValue:5}] to localhost:33333
2016-05-24T23:14:16.370324+00:00 app[web.1]: 23:14:16.370 [main] INFO o.f.s.servlet.ServletTemplateEngine - Scalate template engine using working directory: /tmp/scalate-5146893161861816095-workdir
2016-05-24T23:14:16.746719+00:00 app[web.1]: 23:14:16.746 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext#7a356a0d{/,file:/app/src/main/webapp,AVAILABLE}
2016-05-24T23:14:16.782745+00:00 app[web.1]: 23:14:16.782 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector#7dc51783{HTTP/1.1}{0.0.0.0:8080}
2016-05-24T23:14:16.782924+00:00 app[web.1]: 23:14:16.782 [main] INFO org.eclipse.jetty.server.Server - Started #6674ms
But 5 or 10 seconds later the following error appears, showing that the connection has timed out:
2016-05-24T23:52:32.962896+00:00 heroku[router]: at=error code=H20 desc="App boot timeout" method=GET path="/" host=cloudrobe.herokuapp.com request_id=a7f68d98-54a2-44b7-8f5f-47efce0f1833 fwd="52.90.128.17" dyno= connect= service= status=503 bytes=
2016-05-24T23:52:45.463575+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
This is my Procfile, using port 5000:
web: target/universal/stage/bin/cloudrobe -Dhttp.address=127.0.0.1
Thank you.
Your app is binding to port 8080, but it needs to bind to the port set as the $PORT environment variable on Heroku. To do this, you need to add -Dhttp.port=$PORT to your Procfile. It also needs to bind to 0.0.0.0 and not 127.0.0.1. So it might look like this:
web: target/universal/stage/bin/cloudrobe -Dhttp.address=0.0.0.0 -Dhttp.port=$PORT

How to debug: actor timeout, ReactiveMongo API cursor fails to send request, logback hell

Scala Play app with ReactiveMongo and Akka actors.
sbt run:
I occasionally get this error, like ten times, on the sbt console:
It usually happens after a recompile and Play reload; otherwise it happens less often. It doesn't cause anything to fail, but sometimes the page gives an error.
[error] r.api.Cursor - fails to send request
java.util.concurrent.TimeoutException: Futures timed out after [3000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219) ~[scala-library-2.11.7.jar:na]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:153) ~[scala-library-2.11.7.jar:na]
at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169) ~[scala-library-2.11.7.jar:na]
at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169) ~[scala-library-2.11.7.jar:na]
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:169) ~[akka-actor_2.11-2.3.13.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640) [scala-library-2.11.7.jar:na]
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:167) ~[akka-actor_2.11-2.3.13.jar:na]
at scala.concurrent.Await$.ready(package.scala:169) ~[scala-library-2.11.7.jar:na]
at reactivemongo.api.DefaultCursor$Impl$$anonfun$awaitFailover$1.apply(cursor.scala:474) ~[reactivemongo_2.11-0.11.10.jar:0.11.10]
at reactivemongo.api.DefaultCursor$Impl$$anonfun$awaitFailover$1.apply(cursor.scala:474) ~[reactivemongo_2.11-0.11.10.jar:0.11.10]
Also, when I start the app (connect to MongoDB via the ReactiveMongo driver) I get this output:
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The node set is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
[info] r.c.actors.MongoDBSystem - The primary is now authenticated
Why so much?
How do I debug this kind of stuff?
Edit:
I connect to a MongoDB hosted at mongolab.com, so I tried this:
val defaultStrategy = FailoverStrategy()
val customStrategy =
  FailoverStrategy(
    delayFactor = attempt => 10
  )
// database-wide strategy
val db = connection.db("dbname", customStrategy)
But I saw a timeout of 10500! The network is fine, so what is wrong with ReactiveMongo?
I have this:
val RM = "org.reactivemongo" %% "reactivemongo" % "0.11.10"
val PRM = "org.reactivemongo" %% "play2-reactivemongo" % "0.11.10"
What does this error mean (it doesn't crash the app or anything, so why does it happen), and what kind of code causes it?
I have the same problem, which I could steadily reproduce while testing.
I tried switching to the fresh ReactiveMongo release 0.11.11; the exceptions stopped, but overall system performance went down.
My final solution is to increase the number of connections and change the failover strategy:
val mongoDriver = new MongoDriver(Some(configuration.underlying))
Runtime.getRuntime().addShutdownHook(new Thread() {
  override def run() = {
    mongoDriver.close()
  }
})
val connectionOptions = MongoConnectionOptions(
  nbChannelsPerNode = 40,
  connectTimeoutMS = 5000
)
val mongoConnection: MongoConnection = mongoDriver.connection(List(mongoHost), connectionOptions)
val customStrategy = FailoverStrategy(
  retries = 8,
  delayFactor = n => n * 1.5
)
mongoConnection(mongoDB, customStrategy)
Now I don't see them, or they are harder to reproduce.