Cannot connect to Cassandra using Astyanax - Scala

I am running the latest version of Cassandra on my Mac inside Docker. I have installed DataStax DevCenter on my Mac, and I can connect to Cassandra with it and query my tables easily.
http://i.imgur.com/48MztBM.png
http://i.imgur.com/jPKblly.png
I can see that all required ports for Cassandra are open (as shown below):
CONTAINER ID   IMAGE              COMMAND                   CREATED         STATUS         PORTS                                                                                                       NAMES
e698e61e3bac   cassandra:latest   "/docker-entrypoint.s"    9 minutes ago   Up 9 minutes   0.0.0.0:7000->7000/tcp, 0.0.0.0:7199->7199/tcp, 0.0.0.0:9042->9042/tcp, 0.0.0.0:9160->9160/tcp, 7001/tcp   cassandra
I also logged into my Cassandra container and executed the following commands:
root@e698e61e3bac:/# nodetool -h 127.0.0.1 enablethrift
root@e698e61e3bac:/# nodetool -h 127.0.0.1 statusthrift
running
However, when I run the code below,
val configImpl = new AstyanaxConfigurationImpl()
configImpl.setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE)
configImpl.setCqlVersion("3.4.2")
configImpl.setTargetCassandraVersion("3.7")

val poolConfig = new ConnectionPoolConfigurationImpl("MyConnectionPool")
poolConfig.setPort(9160)
poolConfig.setMaxConnsPerHost(1)
poolConfig.setSeeds("192.168.1.169")

val context = new AstyanaxContext.Builder()
  .forCluster("localhost")
  .forKeyspace("movielens_small")
  .withAstyanaxConfiguration(configImpl)
  .withConnectionPoolConfiguration(poolConfig)
  .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
  .buildKeyspace(ThriftFamilyFactory.getInstance())
context.start()

val keyspace = context.getClient()
val cf = new ColumnFamily[UUID, String]("cf", UUIDSerializer.get, StringSerializer.get())
val result = keyspace.prepareQuery(cf).withCql("select name from movies").execute()
val data = result.getResult.getRows()

for {
  row <- data
  col <- row.getColumns
} {
  println(col)
}

context.shutdown()
I get the following error
07:32:10,606 INFO ConnectionPoolMBeanManager:53 - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=MyConnectionPool,ServiceType=connectionpool
07:32:10,625 INFO CountingConnectionPoolMonitor:194 - AddHost: 192.168.1.169
07:32:10,748 INFO CountingConnectionPoolMonitor:194 - AddHost: 172.17.0.2
07:32:10,748 INFO CountingConnectionPoolMonitor:205 - RemoveHost: 192.168.1.169
[error] (run-main-0) com.netflix.astyanax.connectionpool.exceptions.PoolTimeoutException: PoolTimeoutException: [host=172.17.0.2(172.17.0.2):9160, latency=2003(2003), attempts=1]Timed out waiting for connection
com.netflix.astyanax.connectionpool.exceptions.PoolTimeoutException: PoolTimeoutException: [host=172.17.0.2(172.17.0.2):9160, latency=2003(2003), attempts=1]Timed out waiting for connection
at com.netflix.astyanax.connectionpool.impl.SimpleHostConnectionPool.waitForConnection(SimpleHostConnectionPool.java:231)
at com.netflix.astyanax.connectionpool.impl.SimpleHostConnectionPool.borrowConnection(SimpleHostConnectionPool.java:198)
at com.netflix.astyanax.connectionpool.impl.RoundRobinExecuteWithFailover.borrowConnection(RoundRobinExecuteWithFailover.java:84)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:117)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:352)
at com.netflix.astyanax.thrift.AbstractThriftCqlQuery.execute(AbstractThriftCqlQuery.java:41)
at com.abhi.CassandraScanner$.delayedEndpoint$com$abhi$CassandraScanner$1(CassandraScanner.scala:41)
at com.abhi.CassandraScanner$delayedInit$body.apply(CassandraScanner.scala:19)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App.$anonfun$main$1$adapted(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:378)
at scala.App.main(App.scala:76)
at scala.App.main$(App.scala:74)

I was able to solve the problem. The issue was the NodeDiscoveryType: I changed it to NodeDiscoveryType.NONE and it worked perfectly. I still don't understand why the NodeDiscoveryType has to be NONE and why it won't work with RING_DESCRIBE, though.
Hopefully someone will elaborate further.
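The log above hints at the likely cause: with RING_DESCRIBE, the driver asks the node to describe the ring and swaps the seed 192.168.1.169 for the node's advertised address 172.17.0.2 (the AddHost/RemoveHost lines), which is the container-internal Docker IP and is not reachable from the host. For reference, a minimal sketch of the change that made it work, differing from the original only in the discovery type:
// Same setup as above, but with node discovery disabled so the pool keeps
// using the seed address rather than the ring-advertised container IP.
val configImpl = new AstyanaxConfigurationImpl()
configImpl.setDiscoveryType(NodeDiscoveryType.NONE)
configImpl.setCqlVersion("3.4.2")
configImpl.setTargetCassandraVersion("3.7")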

Related

How to connect to Mongo DB in Docker from Scala Play Framework?

I have a Mongo DB Docker container running on 192.168.0.229. From another computer, I can access it via:
> mongo "mongodb://192.168.0.229:27017/test"
But when I add that configuration string (host="192.168.0.229") to my Play Framework app, I get a timeout error:
[debug] application - Login Form Success: UserData(larry@gmail.com,testPW)
[error] p.a.h.DefaultHttpErrorHandler -
! #7m7kggikl - Internal server error, for (POST) [/] ->
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[TimeoutException: Future timed out after [30 seconds]]]
In the past, the connection worked with host="localhost" and even with an Atlas cluster (host="mycluster.gepld.mongodb.net"), so the same code has connected without problems before. For some reason, Play Framework does not want to connect to this endpoint!
Could it be because the hostname is an IP address? Or is Play/Akka doing something under the covers to block the connection (or something that makes Mongo/Docker refuse it)?
I'm using this driver:
"org.mongodb.scala" %% "mongo-scala-driver" % "4.4.0"
Perhaps I should switch to the Reactive Scala Driver? Any help would be appreciated.
Clarifications:
The MongoDB Docker container is running on a Linux machine on my local network and is reachable at 192.168.0.229. The goal is to point my Play Framework app's configuration at the DB at this address, so that as long as the Docker container is running, I can develop from any computer on my local network. Currently, I am able to access the container through the mongo shell from any computer:
> mongo "mongodb://192.168.0.229:27017/test"
I have a Play Framework app with the following in the Application.conf:
datastore {
  # Dev
  host: "192.168.0.229"
  port: 27017
  dbname: "test"
  user: ""
  password: ""
}
This data is used in a connection helper file called DataStore.scala:
package model.db.mongo

import org.mongodb.scala._
import utils.config.AppConfiguration

trait DataStore extends AppConfiguration {
  lazy val dbHost = config.getString("datastore.host")
  lazy val dbPort = config.getInt("datastore.port")
  lazy val dbUser = getConfigString("datastore.user", "")
  lazy val dbName = getConfigString("datastore.dbname", "")
  lazy val dbPasswd = getConfigString("datastore.password", "")

  //MongoDB Atlas Method (Localhost if DB User is empty)
  val uri: String = s"mongodb+srv://$dbUser:$dbPasswd@$dbHost/$dbName?retryWrites=true&w=majority"
  //val uri: String = "mongodb+srv://192.168.0.229:27017/?compressors=disabled&gssapiServiceName=mongodb"

  System.setProperty("org.mongodb.async.type", "netty")

  val mongoClient: MongoClient = if (getConfigString("datastore.user", "").isEmpty()) MongoClient() else MongoClient(uri)
  print(mongoClient.toString)
  print(mongoClient.listDatabaseNames())

  val database: MongoDatabase = mongoClient.getDatabase(dbName)

  def close = mongoClient.close() //Do this when logging out
}
When you start the app, you open localhost:9000, which is simply a login form. When you submit data that matches a document in the users collection, the Play app times out:
[error] p.a.h.DefaultHttpErrorHandler -
! #7m884abc4 - Internal server error, for (POST) [/] ->
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[TimeoutException: Future timed out after [30 seconds]]]
at play.api.http.HttpErrorHandlerExceptions$.$anonfun$convertToPlayException$2(HttpErrorHandler.scala:381)
at scala.Option.map(Option.scala:242)
at play.api.http.HttpErrorHandlerExceptions$.convertToPlayException(HttpErrorHandler.scala:380)
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:373)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:264)
at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:430)
at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:422)
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:454)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
Caused by: java.util.concurrent.TimeoutException: Future timed out after [30 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.tryAwait0(Promise.scala:212)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:225)
at scala.concurrent.Await$.$anonfun$result$1(package.scala:201)
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:174)
at java.base/java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3118)
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:172)
at akka.dispatch.BatchingExecutor$BlockableBatch.blockOn(BatchingExecutor.scala:116)
at scala.concurrent.Await$.result(package.scala:124)
at model.db.mongo.DataHelpers$ImplicitObservable.headResult(DataHelpers.scala:27)
at model.db.mongo.DataHelpers$ImplicitObservable.headResult$(DataHelpers.scala:27)
The call to the Users collection is defined in UserAccounts.scala:
case class UserAccount(_id: String, fullname: String, username: String, password: String)

object UserAccount extends DataStore {
  val logger: Logger = Logger("database")

  //Required for using Case Classes
  val codecRegistry = fromRegistries(fromProviders(classOf[UserAccount]), DEFAULT_CODEC_REGISTRY)

  //Using Case Class to get a collection
  val coll: MongoCollection[UserAccount] = database.withCodecRegistry(codecRegistry).getCollection("users")

  //Using Document to get a collection
  val listings: MongoCollection[Document] = database.getCollection("users")

  def isValidLogin(username: String, password: String): Boolean = {
    findUser(username) match {
      case Some(u: UserAccount) => if (password.equals(u.password)) { true } else { false }
      case None => false
    }
  }
Just an FYI if anyone runs into this problem. I had a bad line in my DataStore.scala file:
val mongoClient: MongoClient = if (getConfigString("datastore.user", "").isEmpty()) MongoClient() else MongoClient(uri)
Since I was trying to connect without a username (there's no auth on my test db), the above line was saying, "If there's no username, you must be trying to connect to the default MongoClient() location, which is localhost". My mistake.
I simply changed the above line to this:
val mongoClient: MongoClient = MongoClient(uri)
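Note that the remaining uri still uses the mongodb+srv scheme, which relies on a DNS SRV lookup of a hostname and generally will not work with a raw IP address such as 192.168.0.229. A minimal sketch of building a plain connection string from the same config values instead (plainUri is a name introduced here, not part of the original code):
// Sketch: plain mongodb:// connection string for an IP-addressed container.
// dbHost, dbPort, dbName, dbUser and dbPasswd are the vals from DataStore.scala above.
val plainUri: String =
  if (dbUser.isEmpty) s"mongodb://$dbHost:$dbPort/$dbName"
  else s"mongodb://$dbUser:$dbPasswd@$dbHost:$dbPort/$dbName"

val mongoClient: MongoClient = MongoClient(plainUri)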

java.lang.NoClassDefFoundError: Could not initialize class XXXXXXXX in scala spark

I have written Scala Spark code for my project, using IntelliJ as the IDE. It runs fine locally, but it shows this error when run on an AWS EMR cluster.
It was failing at the commented line below:
var join_sql = "select ipfile.id,ipfile.col1,opfile.col2 from ipfile join opfile on ipfile.id=opfile.id"
var df1 = Operation.spark.sql(join_sql)
df1.createOrReplaceTempView("df1")

var df2 = df1.groupBy("col1", "col2").count()
df2.createOrReplaceTempView("df2")
df2 = Operation.spark.sql("select * from df2 order by count desc")
print("count : ", df2.count())

try {
  df2.foreach(t => {
    impact = t.getAs[Long]("impact").toString // Job was aborting at this particular line
    m1 = t.getAs[String]("col1")
    m2 = t.getAs[String]("col2")
    print("m1" + "m2")
  })
When I built the jar with sbt assembly and ran it in local mode, it worked fine, but when I built the jar for yarn-client and executed it in cluster mode, it showed this error.

CDC with WSO2 Streaming Integrator and Postgres DB

I am trying to set up Change Data Capture (CDC) between WSO2 Streaming Integrator and a local Postgres DB.
I have added the Postgres driver (v42.2.5) to SI_HOME/lib and I am able to read data from the database from a Siddhi application.
I am following the CDCWithListeningMode example to implement CDC and I am using pgoutput as the logical decoding plugin. But when I run the application, I get the following log:
[2020-04-23_19-02-37_460] INFO {org.apache.kafka.connect.json.JsonConverterConfig} - JsonConverterConfig values:
converter.type = key
schemas.cache.size = 1000
schemas.enable = true
[2020-04-23_19-02-37_461] INFO {org.apache.kafka.connect.json.JsonConverterConfig} - JsonConverterConfig values:
converter.type = value
schemas.cache.size = 1000
schemas.enable = false
[2020-04-23_19-02-37_461] INFO {io.debezium.embedded.EmbeddedEngine$EmbeddedConfig} - EmbeddedConfig values:
access.control.allow.methods =
access.control.allow.origin =
bootstrap.servers = [localhost:9092]
header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
key.converter = class org.apache.kafka.connect.json.JsonConverter
listeners = null
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
offset.flush.interval.ms = 60000
offset.flush.timeout.ms = 5000
offset.storage.file.filename =
offset.storage.partitions = null
offset.storage.replication.factor = null
offset.storage.topic =
plugin.path = null
rest.advertised.host.name = null
rest.advertised.listener = null
rest.advertised.port = null
rest.host.name = null
rest.port = 8083
ssl.client.auth = none
task.shutdown.graceful.timeout.ms = 5000
value.converter = class org.apache.kafka.connect.json.JsonConverter
[2020-04-23_19-02-37_516] INFO {io.debezium.connector.common.BaseSourceTask} - offset.storage = io.siddhi.extension.io.cdc.source.listening.InMemoryOffsetBackingStore
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} - database.server.name = localhost_5432
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} - database.port = 5432
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} - table.whitelist = SweetProductionTable
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} - cdc.source.object = 1716717434
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} - database.hostname = localhost
[2020-04-23_19-02-37_518] INFO {io.debezium.connector.common.BaseSourceTask} - database.password = ********
[2020-04-23_19-02-37_518] INFO {io.debezium.connector.common.BaseSourceTask} - name = CDCWithListeningModeinsertSweetProductionStream
[2020-04-23_19-02-37_518] INFO {io.debezium.connector.common.BaseSourceTask} - server.id = 6140
[2020-04-23_19-02-37_519] INFO {io.debezium.connector.common.BaseSourceTask} - database.history = io.debezium.relational.history.FileDatabaseHistory
[2020-04-23_19-02-38_103] INFO {io.debezium.connector.postgresql.PostgresConnectorTask} - user 'user_name' connected to database 'db_name' on PostgreSQL 11.5, compiled by Visual C++ build 1914, 64-bit with roles:
role 'user_name' [superuser: false, replication: true, inherit: true, create role: false, create db: false, can log in: true] (Encoded)
[2020-04-23_19-02-38_104] INFO {io.debezium.connector.postgresql.PostgresConnectorTask} - No previous offset found
[2020-04-23_19-02-38_104] INFO {io.debezium.connector.postgresql.PostgresConnectorTask} - Taking a new snapshot of the DB and streaming logical changes once the snapshot is finished...
[2020-04-23_19-02-38_105] INFO {io.debezium.util.Threads} - Requested thread factory for connector PostgresConnector, id = localhost_5432 named = records-snapshot-producer
[2020-04-23_19-02-38_105] INFO {io.debezium.util.Threads} - Requested thread factory for connector PostgresConnector, id = localhost_5432 named = records-stream-producer
[2020-04-23_19-02-38_293] INFO {io.debezium.connector.postgresql.connection.PostgresConnection} - Obtained valid replication slot ReplicationSlot [active=false, latestFlushedLSN=null]
[2020-04-23_19-02-38_704] ERROR {io.siddhi.core.stream.input.source.Source} - Error on 'CDCWithListeningMode'. Connection to the database lost. Error while connecting at Source 'cdc' at 'insertSweetProductionStream'. Will retry in '5 sec'. (Encoded)
io.siddhi.core.exception.ConnectionUnavailableException: Connection to the database lost.
at io.siddhi.extension.io.cdc.source.CDCSource.lambda$connect$1(CDCSource.java:424)
at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:793)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.connect.errors.ConnectException: Cannot create replication connection
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.<init>(PostgresReplicationConnection.java:87)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.<init>(PostgresReplicationConnection.java:38)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$ReplicationConnectionBuilder.build(PostgresReplicationConnection.java:362)
at io.debezium.connector.postgresql.PostgresTaskContext.createReplicationConnection(PostgresTaskContext.java:65)
at io.debezium.connector.postgresql.RecordsStreamProducer.<init>(RecordsStreamProducer.java:81)
at io.debezium.connector.postgresql.RecordsSnapshotProducer.<init>(RecordsSnapshotProducer.java:70)
at io.debezium.connector.postgresql.PostgresConnectorTask.createSnapshotProducer(PostgresConnectorTask.java:133)
at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:86)
at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:45)
at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:677)
... 3 more
Caused by: io.debezium.jdbc.JdbcConnectionException: ERROR: could not access file "decoderbufs": No such file or directory
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.initReplicationSlot(PostgresReplicationConnection.java:145)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.<init>(PostgresReplicationConnection.java:79)
... 12 more
Caused by: org.postgresql.util.PSQLException: ERROR: could not access file "decoderbufs": No such file or directory
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
at org.postgresql.replication.fluent.logical.LogicalCreateSlotBuilder.make(LogicalCreateSlotBuilder.java:48)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.initReplicationSlot(PostgresReplicationConnection.java:108)
... 13 more
Debezium defaults to the decoderbufs plugin, hence "could not access file "decoderbufs": No such file or directory".
According to this answer, the issue is due to the configuration of the decoderbufs plugin.
Details
Postgres - 11.4
siddhi-cdc-io - 2.0.3
Debezium - 0.8.3
How do I configure the embedded Debezium engine to use the pgoutput plugin? Will changing this configuration fix the error?
Please help me with this issue. I have not found any resources that can help me.
You either need to update Debezium to the latest 1.1 version, which lets you select the pgoutput plugin via the plugin.name config option, or you need to deploy (and maybe build) the decoderbufs.so library on your PostgreSQL server.
I'd recommend the former, as 0.8.3 is a very old version.
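For illustration, here is a sketch of the Debezium PostgreSQL connector properties that select pgoutput once you are on Debezium 1.x; how these get passed through the Siddhi cdc source (for example via an extension-specific connector-properties option) depends on the siddhi-io-cdc version and is an assumption here:
import java.util.Properties

// Sketch (assumes Debezium 1.x): choose pgoutput as the logical decoding
// output plug-in instead of the default decoderbufs.
val debeziumProps = new Properties()
debeziumProps.setProperty("connector.class", "io.debezium.connector.postgresql.PostgresConnector")
debeziumProps.setProperty("plugin.name", "pgoutput") // default is "decoderbufs"
debeziumProps.setProperty("database.hostname", "localhost")
debeziumProps.setProperty("database.port", "5432")
debeziumProps.setProperty("database.dbname", "db_name")
debeziumProps.setProperty("database.user", "user_name")
debeziumProps.setProperty("database.password", "********")
debeziumProps.setProperty("table.whitelist", "SweetProductionTable")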
I observed this behavior with PostgreSQL 12 when I tried to do CDC with the pgoutput logical decoding output plug-in. It seems that even though I configured the database with pgoutput, the Siddhi extension tries to make the connection using "decoderbufs" as the decoding plug-in.
When I configured decoderbufs as the logical decoding output plug-in at the database level, I was able to use the Siddhi IO extension without any issue.
It seems that, for now, Siddhi IO CDC only supports the decoderbufs logical decoding output plug-in with PostgreSQL.

Connecting to DSE Graph running in a Docker container results in "No host found"

I am running my DSE Graph inside a Docker container with this command.
docker run -e DS_LICENSE=accept -p 9042:9042 --name my-dse -d datastax/dse-server -g -k -s
In my Scala code I reference it as follows:
object GrDbConnection {
  val dseCluster = DseCluster.builder()
    .addContactPoints("127.0.0.1").withPort(9042)
    .build()
  val graphName = "graphName"
  val graphOptions = new GraphOptions()
    .setGraphName(graphName)

  var graph: ScalaGraph = null
  try {
    val session = dseCluster.connect()
    // The following uses the DSE graph schema API, which is currently only supported by the string-based
    // execution interface. Eventually there will be a programmatic API for making schema changes, but until
    // then this needs to be used.
    // Create graph
    session.executeGraph("system.graph(name).ifNotExists().create()", ImmutableMap.of("name", graphName))
    // Clear the schema to drop any existing data and schema
    session.executeGraph(new SimpleGraphStatement("schema.clear()").setGraphName(graphName))
    // Note: typically you would not want to use development mode and allow scans, but it is good for convenience
    // and experimentation during development.
    // Enable development mode and allow scans
    session.executeGraph(new SimpleGraphStatement("schema.config().option('graph.schema_mode').set('development')")
      .setGraphName(graphName))
    session.executeGraph(new SimpleGraphStatement("schema.config().option('graph.allow_scan').set('true')")
      .setGraphName(graphName))
    // Create a ScalaGraph from a remote Traversal Source using withRemote
    // See: http://tinkerpop.apache.org/docs/current/reference/#connecting-via-remotegraph for more details
    val connection = DseRemoteConnection.builder(session)
      .withGraphOptions(graphOptions)
      .build()
    graph = EmptyGraph.instance().asScala
      .configure(_.withRemote(connection))
  } finally {
    dseCluster.close()
  }
}
Then, in one of my controllers, I use this to run a query against the DSE Graph:
def test = Action {
  val r = GrDbConnection.graph.V().count()
  print(r.iterate())
  Ok
}
This returns an error
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[NoHostAvailableException: All host(s) tried for query failed (no host was tried)]]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:251)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:178)
at play.core.server.AkkaHttpServer$$anonfun$1.applyOrElse(AkkaHttpServer.scala:363)
at play.core.server.AkkaHttpServer$$anonfun$1.applyOrElse(AkkaHttpServer.scala:361)
at scala.concurrent.Future.$anonfun$recoverWith$1(Future.scala:413)
at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:37)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:221)
at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:41)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:292)
at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:109)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:89)
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:124)
at com.datastax.driver.dse.DefaultDseSession.executeGraphAsync(DefaultDseSession.java:123)
at com.datastax.dse.graph.internal.DseRemoteConnection.submitAsync(DseRemoteConnection.java:74)
at org.apache.tinkerpop.gremlin.process.remote.traversal.step.map.RemoteStep.promise(RemoteStep.java:89)
at org.apache.tinkerpop.gremlin.process.remote.traversal.step.map.RemoteStep.processNextStart(RemoteStep.java:65)
It turns out all I needed to do was remove
finally {
dseCluster.close()
}
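The finally block runs as part of the GrDbConnection object initializer, so the cluster is closed as soon as the object is constructed, before the controller ever runs a traversal, which is why no host is available. A minimal sketch of keeping the connection open for the lifetime of the app and closing it explicitly on shutdown instead (the shutdown method name is an assumption):
object GrDbConnection {
  val dseCluster = DseCluster.builder()
    .addContactPoints("127.0.0.1").withPort(9042)
    .build()
  val session = dseCluster.connect()

  // ... graph creation and schema setup as above ...

  // Call this once when the application stops, not in the object initializer.
  def shutdown(): Unit = dseCluster.close()
}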

How do I resolve a MongoDB timeout error when connecting via the Scala Play! framework?

I am connecting to MongoDB while using the Scala Play! framework. I end up getting this timeout error:
! #6j672dke5 - Internal server error, for (GET) [/accounts] ->
play.api.Application$$anon$1: Execution exception[[MongoTimeoutException: Timed out while waiting to connect after 10000 ms]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10-2.2.1.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10-2.2.1.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:165) [play_2.10-2.2.1.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:162) [play_2.10-2.2.1.jar:2.2.1]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [scala-library-2.10.4.jar:na]
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185) [scala-library-2.10.4.jar:na]
Caused by: com.mongodb.MongoTimeoutException: Timed out while waiting to connect after 10000 ms
at com.mongodb.BaseCluster.getDescription(BaseCluster.java:131) ~[mongo-java-driver-2.12.3.jar:na]
at com.mongodb.DBTCPConnector.getClusterDescription(DBTCPConnector.java:396) ~[mongo-java-driver-2.12.3.jar:na]
at com.mongodb.DBTCPConnector.getType(DBTCPConnector.java:569) ~[mongo-java-driver-2.12.3.jar:na]
at com.mongodb.DBTCPConnector.isMongosConnection(DBTCPConnector.java:370) ~[mongo-java-driver-2.12.3.jar:na]
at com.mongodb.Mongo.isMongosConnection(Mongo.java:645) ~[mongo-java-driver-2.12.3.jar:na]
at com.mongodb.DBCursor._check(DBCursor.java:454) ~[mongo-java-driver-2.12.3.jar:na]
Here is my Scala code for connecting to the database:
//models.scala
package models.mongodb

//imports

package object mongoContext {
  //context stuff
  val client = MongoClient(current.configuration.getString("mongo.host").toString())
  val database = client(current.configuration.getString("mongo.database").toString())
}
Here is the actual model that is making the connection:
//google.scala
package models.mongodb

//imports

case class Account(
  id: ObjectId = new ObjectId,
  name: String
)

object AccountDAO extends SalatDAO[Account, ObjectId](
  collection = mongoContext.database("accounts")
)

object Account {
  def all(): List[Account] = AccountDAO.find(MongoDBObject.empty).toList
}
Here's the Play! framework MongoDB conf information:
# application.conf
# mongodb connection details
mongo.host="localhost"
mongo.port=27017
mongo.database="advanced"
MongoDB is running on my local machine. I can connect to it by typing mongo in a terminal window. Here's the relevant part of the conf file:
# mongod.conf
# Where to store the data.
# Note: if you run mongodb as a non-root user (recommended) you may
# need to create and set permissions for this directory manually,
# e.g., if the parent directory isn't mutable by the mongodb user.
dbpath=/var/lib/mongodb
#where to log
logpath=/var/log/mongodb/mongod.log
logappend=true
#port = 27017
# Listen to local interface only. Comment out to listen on all interfaces.
#bind_ip = 127.0.0.1
So what's causing this timeout error and how do I fix it? Thanks!
I figured out that I needed to change:
val client = MongoClient(current.configuration.getString("mongo.host").toString())
val database = client(current.configuration.getString("mongo.database").toString())
to:
val client = MongoClient(conf.getString("mongo.host"))
val database = client(conf.getString("mongo.database"))
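The underlying problem is that current.configuration.getString returns an Option[String], so calling .toString() on it produced the literal string "Some(localhost)" rather than "localhost", and the driver then timed out waiting for a host that doesn't exist. If you prefer to keep the Play configuration API, a sketch of an equivalent fix (the default values here are assumptions):
// Unwrap the Option returned by Play's Configuration.getString before
// handing the values to MongoClient.
val client   = MongoClient(current.configuration.getString("mongo.host").getOrElse("localhost"))
val database = client(current.configuration.getString("mongo.database").getOrElse("advanced"))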