play slick database configuration issues with postgres driver - postgresql

I'm trying to set up a simple play 2.5 application with slick and postgres, but can't seem to get past an error.
The error I'm getting is
[error] p.a.d.s.DefaultSlickApi - Failed to create Slick database config for key default.
slick.SlickException: Error getting instance of profile "slick.jdbc.PostgresProfile"
...
Caused by: java.lang.InstantiationException: slick.jdbc.PostgresProfile
...
Caused by: java.lang.NoSuchMethodException: slick.jdbc.PostgresProfile.<init>()
...
I've got the following in my application.conf
slick.dbs.default {
  driver = "slick.jdbc.PostgresProfile"
  db = {
    driver = "org.postgresql.Driver"
    user = postgres
    host = localhost
    port = 5432
    password = ""
    host = ${?EVENTUAL_DB_HOST}
    port = ${?EVENTUAL_DB_PORT}
    user = ${?EVENTUAL_DB_USER}
    password = ${?EVENTUAL_DB_PW}
    url = "jdbc:postgresql://"${slick.dbs.default.db.host}":"${slick.dbs.default.db.port}"/"${slick.dbs.default.db.user}
  }
}
and these in my dependencies
"com.typesafe.play" %% "play-slick" % "2.1.0",
"com.typesafe.slick" %% "slick-codegen" % "3.1.1",
"com.github.tminglei" %% "slick-pg" % "0.15.0-RC", //"0.14.6",
"org.postgresql" % "postgresql" % "42.0.0"
If I change slick.dbs.default.driver to slick.driver.PostgresDriver (which is evidently deprecated now), I get
[error] p.a.d.s.DefaultSlickApi - Failed to create Slick database config for key default.
slick.SlickException: Error getting instance of profile "slick.driver.PostgresDriver"
...
Caused by: java.lang.ClassNotFoundException: slick.driver.PostgresDriver
...
I'm about to pull my hair out here and can't find any other resources to look at. Does anyone have any idea what's going on?

Sure enough, as recommended by insan-e, all I had to do was add a $, so slick.dbs.default.driver should be "slick.jdbc.PostgresProfile$". PostgresProfile is a Scala object, and the object's class name on the JVM ends in $; without it, Slick tries to instantiate the plain trait, which has no public no-arg constructor, hence the InstantiationException and NoSuchMethodException above.
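For reference, the two driver settings in application.conf end up looking roughly like this (everything else in the db block stays exactly as above):
slick.dbs.default.driver = "slick.jdbc.PostgresProfile$"
slick.dbs.default.db.driver = "org.postgresql.Driver"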

Related

Elastic4S - Jackson's class ScalaObjectMapper isn't found and throws a NoSuchMethodError

I'm having an issue while using Elastic4S in a Scala project. The following error is thrown:
java.lang.NoSuchMethodError:
com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper.$init$(Lcom/fasterxml/jackson/module/scala/experimental/ScalaObjectMapper;)V
Followed by:
java.lang.NoClassDefFoundError: Could not initialize class
com.sksamuel.elastic4s.json.JacksonSupport$
Here are the dependencies used:
"com.sksamuel.elastic4s" %% "elastic4s-core" % "6.7.3",
"com.sksamuel.elastic4s" %% "elastic4s-http" % "6.7.3",
"com.github.swagger-akka-http" %% "swagger-akka-http" % "2.0.4",
"com.github.swagger-akka-http" %% "swagger-scala-module" % "2.0.5",
...
),
assemblyMergeStrategy in assembly := { _ => MergeStrategy.first }
And the only bit of code launched from Elastic4s is in this method:
def testClusterUp(log: LoggingAdapter): Unit = {
  val response: Response[NodeInfoResponse] = client.execute(
    nodeInfo()
  ).await
  if (response.isError) {
    log.error(s"[ERROR]-[ELASTICSEARCH] $response")
    throw new ExceptionInInitializerError(s"an error occurred during Elastic connector initialization : ${response.error}")
  } else if (response.isSuccess) {
    log.info("Cluster started successfully !")
  }
}
Any help would be appreciated.
There seem to be other dependencies pulling in a conflicting version of this one. For example, swagger-akka-http uses another version of jackson-module-scala and can't be included alongside it without some tweaking in build.sbt. Here is the configuration to use in that case:
("com.github.swagger-akka-http" %% "swagger-akka-http" % "2.0.4") excludeAll(ExclusionRule(organization = "com.fasterxml.jackson.module")),
("com.github.swagger-akka-http" %% "swagger-scala-module" % "2.0.5") excludeAll(ExclusionRule(organization = "com.fasterxml.jackson.module")),
See more about sbt dependency exclusion rules here:
https://www.scala-sbt.org/release/docs/Library-Management.html#Exclude+Transitive+Dependencies
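For context, those two lines just sit inside the usual libraryDependencies sequence in build.sbt. A rough sketch, keeping the elastic4s entries from the question as-is:
libraryDependencies ++= Seq(
  "com.sksamuel.elastic4s" %% "elastic4s-core" % "6.7.3",
  "com.sksamuel.elastic4s" %% "elastic4s-http" % "6.7.3",
  ("com.github.swagger-akka-http" %% "swagger-akka-http" % "2.0.4")
    .excludeAll(ExclusionRule(organization = "com.fasterxml.jackson.module")),
  ("com.github.swagger-akka-http" %% "swagger-scala-module" % "2.0.5")
    .excludeAll(ExclusionRule(organization = "com.fasterxml.jackson.module"))
)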

Slick 3.0 Connection to MySQL with connection pool enabled

I have the following application.conf
mysql {
  driver = "com.mysql.jdbc.Driver"
  url = "jdbc:mysql://myserver:3306/mydb"
  user = "foo"
  password = "bar"
  keepAliveConnection = true
  connectionPool = enabled
}
when I do Database.forConfig("mysql") I get an exception
java.lang.ClassNotFoundException: enabled
at java.lang.ClassLoader.findClass(ClassLoader.java:530)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at slick.util.ClassLoaderUtil$$anon$1.loadClass(ClassLoaderUtil.scala:17)
at slick.jdbc.JdbcDataSource$.loadFactory$1(JdbcDataSource.scala:37)
at slick.jdbc.JdbcDataSource$.forConfig(JdbcDataSource.scala:46)
at slick.jdbc.JdbcBackend$DatabaseFactoryDef.forConfig(JdbcBackend.scala:288)
at slick.jdbc.JdbcBackend$DatabaseFactoryDef.forConfig$(JdbcBackend.scala:285)
at slick.jdbc.JdbcBackend$$anon$3.forConfig(JdbcBackend.scala:33)
at com.abhi.CodeGen$.delayedEndpoint$com$abhi$CodeGen$1(CodeGen.scala:25)
at com.abhi.CodeGen$delayedInit$body.apply(CodeGen.scala:15)
at scala.Function0.apply$mcV$sp(Function0.scala:34)
at scala.Function0.apply$mcV$sp$(Function0.scala:34)
If I change my config to
mysql {
  driver = "com.mysql.jdbc.Driver"
  url = "jdbc:mysql://myserver:3306/mydb"
  user = "foo"
  password = "bar"
  keepAliveConnection = true
  connectionPool = disabled
}
Then everything works.
But why do I get this weird error message if I try to establish a connection with the connection pool enabled?
Edit: This is my build.sbt
"com.typesafe.slick" %% "slick" % "3.2.0",
"com.typesafe.slick" %% "slick-codegen" % "3.2.0",
"mysql" % "mysql-connector-java" % "5.1.38",
"com.zaxxer" % "HikariCP" % "2.6.3"
You probably don't have a jar with a connection-pool implementation, such as HikariCP, on the classpath. Possible duplicate of Setting connectionPool crashes Slick 3.0.
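For what it's worth, connectionPool doesn't take enabled as a value: Slick accepts disabled, or the name of the pool implementation to use (HikariCP is the default), and treats anything else as a class name to load, which matches the ClassNotFoundException: enabled above. With the HikariCP dependency you already list, something like this should work:
mysql {
  driver = "com.mysql.jdbc.Driver"
  url = "jdbc:mysql://myserver:3306/mydb"
  user = "foo"
  password = "bar"
  keepAliveConnection = true
  connectionPool = "HikariCP" // the default, so this line can also simply be omitted
}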

Spark Streaming Context Hangs on Start

All,
I am trying to use Kinesis with Spark Streaming on Spark 1.6.0 via Databricks and my ssc.start() command is hanging.
I am using the following function to make my Spark Streaming Context:
def creatingFunc(sc: SparkContext): StreamingContext = {
  // Create a StreamingContext
  val ssc = new StreamingContext(sc, Seconds(batchIntervalSeconds))
  // Create a Kinesis stream
  val kinesisStream = KinesisUtils.createStream(ssc,
    kinesisAppName, kinesisStreamName,
    kinesisEndpointUrl, RegionUtils.getRegionByEndpoint(kinesisEndpointUrl).getName,
    InitialPositionInStream.LATEST, Seconds(kinesisCheckpointIntervalSeconds),
    StorageLevel.MEMORY_AND_DISK_SER_2, config.awsAccessKeyId, config.awsSecretKey)
  kinesisStream.print()
  ssc.remember(Minutes(1))
  ssc.checkpoint(checkpointDir)
  ssc
}
However when I run the following to start the streaming context:
// Stop any existing StreamingContext
val stopActiveContext = true
if (stopActiveContext) {
StreamingContext.getActive.foreach { _.stop(stopSparkContext = false) }
}
// Get or create a streaming context.
val ssc = StreamingContext.getActiveOrCreate(() => main.creatingFunc(sc))
// This starts the streaming context in the background.
ssc.start()
The last bit, ssc.start(), hangs indefinitely without issuing any log messages. I am running this on a freshly spun-up cluster with no other notebooks attached, so there aren't any other streaming contexts running.
Any thoughts?
Additionally, here are the libraries I am using (from my build.sbt file):
"org.apache.spark" % "spark-core_2.10" % "1.6.0"
"org.apache.spark" % "spark-sql_2.10" % "1.6.0"
"org.apache.spark" % "spark-streaming-kinesis-asl_2.10" % "1.6.0"
"org.apache.spark" % "spark-streaming_2.10" % "1.6.0"
Edit 1:
After running the code again recently, I got the following error:
java.rmi.RemoteException: java.util.concurrent.TimeoutException: Timed out retrying send to http://10.210.224.74:7070: 2 hours; nested exception is:
java.util.concurrent.TimeoutException: Timed out retrying send to http://10.210.224.74:7070: 2 hours
at com.databricks.backend.daemon.data.client.DbfsClient.send0(DbfsClient.scala:71)
at com.databricks.backend.daemon.data.client.DbfsClient.sendIdempotent(DbfsClient.scala:40)
at com.databricks.backend.daemon.data.client.DatabricksFileSystem.listStatus(DatabricksFileSystem.scala:189)
at org.apache.spark.streaming.util.FileBasedWriteAheadLog.initializeOrRecover(FileBasedWriteAheadLog.scala:228)
at org.apache.spark.streaming.util.FileBasedWriteAheadLog.<init>(FileBasedWriteAheadLog.scala:72)
at org.apache.spark.streaming.util.WriteAheadLogUtils$$anonfun$2.apply(WriteAheadLogUtils.scala:141)
at org.apache.spark.streaming.util.WriteAheadLogUtils$$anonfun$2.apply(WriteAheadLogUtils.scala:141)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.streaming.util.WriteAheadLogUtils$.createLog(WriteAheadLogUtils.scala:140)
at org.apache.spark.streaming.util.WriteAheadLogUtils$.createLogForDriver(WriteAheadLogUtils.scala:98)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker$$anonfun$createWriteAheadLog$1.apply(ReceivedBlockTracker.scala:254)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker$$anonfun$createWriteAheadLog$1.apply(ReceivedBlockTracker.scala:252)
at scala.Option.map(Option.scala:145)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.createWriteAheadLog(ReceivedBlockTracker.scala:252)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.<init>(ReceivedBlockTracker.scala:75)
at org.apache.spark.streaming.scheduler.ReceiverTracker.<init>(ReceiverTracker.scala:106)
at org.apache.spark.streaming.scheduler.JobScheduler.start(JobScheduler.scala:80)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply$mcV$sp(StreamingContext.scala:610)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply(StreamingContext.scala:606)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply(StreamingContext.scala:606)
at ... run in separate thread using org.apache.spark.util.ThreadUtils ... ()
at org.apache.spark.streaming.StreamingContext.liftedTree1$1(StreamingContext.scala:606)
at org.apache.spark.streaming.StreamingContext.start(StreamingContext.scala:600)
Caused by: java.util.concurrent.TimeoutException: Timed out retrying send to http://10.210.224.74:7070: 2 hours
at com.databricks.rpc.ReliableJettyClient.retryOnNetworkError(ReliableJettyClient.scala:138)
at com.databricks.rpc.ReliableJettyClient.sendIdempotent(ReliableJettyClient.scala:46)
at com.databricks.backend.daemon.data.client.DbfsClient.doSend(DbfsClient.scala:83)
at com.databricks.backend.daemon.data.client.DbfsClient.send0(DbfsClient.scala:60)
at com.databricks.backend.daemon.data.client.DbfsClient.sendIdempotent(DbfsClient.scala:40)
at com.databricks.backend.daemon.data.client.DatabricksFileSystem.listStatus(DatabricksFileSystem.scala:189)
at org.apache.spark.streaming.util.FileBasedWriteAheadLog.initializeOrRecover(FileBasedWriteAheadLog.scala:228)
at org.apache.spark.streaming.util.FileBasedWriteAheadLog.<init>(FileBasedWriteAheadLog.scala:72)
at org.apache.spark.streaming.util.WriteAheadLogUtils$$anonfun$2.apply(WriteAheadLogUtils.scala:141)
at org.apache.spark.streaming.util.WriteAheadLogUtils$$anonfun$2.apply(WriteAheadLogUtils.scala:141)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.streaming.util.WriteAheadLogUtils$.createLog(WriteAheadLogUtils.scala:140)
at org.apache.spark.streaming.util.WriteAheadLogUtils$.createLogForDriver(WriteAheadLogUtils.scala:98)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker$$anonfun$createWriteAheadLog$1.apply(ReceivedBlockTracker.scala:254)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker$$anonfun$createWriteAheadLog$1.apply(ReceivedBlockTracker.scala:252)
at scala.Option.map(Option.scala:145)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.createWriteAheadLog(ReceivedBlockTracker.scala:252)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.<init>(ReceivedBlockTracker.scala:75)
at org.apache.spark.streaming.scheduler.ReceiverTracker.<init>(ReceiverTracker.scala:106)
at org.apache.spark.streaming.scheduler.JobScheduler.start(JobScheduler.scala:80)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply$mcV$sp(StreamingContext.scala:610)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply(StreamingContext.scala:606)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply(StreamingContext.scala:606)
at org.apache.spark.util.ThreadUtils$$anon$1.run(ThreadUtils.scala:122)

SQLite with Slick and Play

So I'm trying to modify one of the Typesafe Activator templates to use an SQLite database instead of the built-in H2 one. Here is the original template https://github.com/playframework/playframework/tree/master/templates/play-scala-intro
What I've done is to change the application.conf file to have these lines:
slick.dbs.default.driver=slick.driver.SQLiteDriver
slick.dbs.default.db.driver=org.sqlite.JDBC
slick.dbs.default.db.url="jdbc:sqlite:/home/marcin/play-scala-intro/people.db"
Of course I also created the file itself (just did touch people.db). Then when I start my application I get the following error:
[info] ! #6ooe822f0 - Internal server error, for (GET) [/] ->
[info]
[info] play.api.Configuration$$anon$1: Configuration error[Cannot connect to database [default]]
[info] at play.api.Configuration$.configError(Configuration.scala:178) ~[play_2.11-2.4.6.jar:2.4.6]
[info] at play.api.Configuration.reportError(Configuration.scala:829) ~[play_2.11-2.4.6.jar:2.4.6]
[info] at play.api.db.slick.DefaultSlickApi$DatabaseConfigFactory.create(SlickApi.scala:93) ~[play-slick_2.11-1.1.1.jar:1.1.1]
[info] at play.api.db.slick.DefaultSlickApi$DatabaseConfigFactory.get$lzycompute(SlickApi.scala:81) ~[play-slick_2.11-1.1.1.jar:1.1.1]
[info] at play.api.db.slick.DefaultSlickApi$DatabaseConfigFactory.get(SlickApi.scala:80) ~[play-slick_2.11-1.1.1.jar:1.1.1]
I was looking for some examples of how to set it up, like here
https://groups.google.com/forum/#!msg/scalaquery/07JBbnZ5VZk/7D1_5N4uGjsJ
or here:
https://github.com/playframework/play-slick
but they weren't similar enough to my code and since I'm new to all this I couldn't really figure out how to use them. Help appreciated, thanks!
[EDIT]:
Following a suggestion from the comments I added "$" at the end of the driver name, so that what's in the conf file now looks like this:
slick.dbs.default.driver=slick.driver.SQLiteDriver$
slick.dbs.default.db.driver=org.sqlite.JDBC
slick.dbs.default.db.url="jdbc:sqlite:/home/marcin/play-scala-intro/people.db"
That works in the sense that another error comes up:
[info] Caused by: java.sql.SQLException: JDBC4 Connection.isValid() method not supported, connection test query must be configured
[info] at com.zaxxer.hikari.pool.BaseHikariPool.addConnection(BaseHikariPool.java:441) ~[HikariCP-java6-2.3.7.jar:na]
[info] at com.zaxxer.hikari.pool.BaseHikariPool$1.run(BaseHikariPool.java:413) ~[HikariCP-java6-2.3.7.jar:na]
[info] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_66]
Yes, this is quite an old question, but maybe the answer can be useful for someone.
All of this is based on the example presented here:
https://developer.lightbend.com/start/?group=play&project=play-samples-play-scala-slick-example
I've successfully run an SQLite database with Scala/Play/Slick by performing the following steps:
build.sbt file:
lazy val root = (project in file("."))
  .enablePlugins(PlayScala)
  .settings(
    name := """Application""",
    version := "2.8.x",
    scalaVersion := "2.13.1",
    libraryDependencies ++= Seq(
      guice,
      "org.playframework.anorm" %% "anorm" % "2.6.5",
      "com.typesafe.play" %% "play-slick" % "5.0.0",
      "com.typesafe.play" %% "play-slick-evolutions" % "5.0.0",
      "org.xerial" % "sqlite-jdbc" % "3.31.1",
      specs2 % Test,
    ),
    scalacOptions ++= Seq(
      "-feature",
      "-deprecation",
      "-Xfatal-warnings"
    )
  )
application.conf
slick.dbs.default.profile="slick.jdbc.SQLiteProfile$"
slick.dbs.default.db.profile="slick.driver.SQLiteDriver"
slick.dbs.default.db.url="jdbc:sqlite:/mnt/comments.db"
slick.dbs.default.db.driver=org.sqlite.JDBC
Please note that it also works with a relative path:
slick.dbs.default.profile="slick.jdbc.SQLiteProfile$"
slick.dbs.default.db.profile="slick.driver.SQLiteDriver"
slick.dbs.default.db.url="jdbc:sqlite:./comments.db"
slick.dbs.default.db.driver=org.sqlite.JDBC
Play Evolution also works:
play.evolutions {
  db.default.enabled = true
}
I have a Play, Slick, SQLite project: https://github.com/aukgit/scala-open-real-time-bidding-rtb
It also has a repository pattern.
Please check out https://github.com/aukgit/scala-open-real-time-bidding-rtb/releases/tag/v0.0.5
To give an example of how Slick works with SQLite:
import slick.jdbc.SQLiteProfile.api._ // must import
lazy val db = Database.forURL(url = AbsoluteDatabasePath)
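A quick usage sketch against that db (illustrative only; AbsoluteDatabasePath is assumed to be a JDBC URL such as jdbc:sqlite:./comments.db):
import scala.concurrent.Await
import scala.concurrent.duration._
import slick.jdbc.SQLiteProfile.api._

lazy val db = Database.forURL(url = AbsoluteDatabasePath)

// run a trivial plain-SQL query to verify the connection works
val one: Int = Await.result(db.run(sql"select 1".as[Int].head), 10.seconds)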
Repository pattern for slick
https://github.com/aukgit/scala-open-real-time-bidding-rtb/tree/6bf6beb6adb93b83cba49085d2d33269502189e1/app/shared/com/repository
You may download the repo at that release and run it with sbt (sbt run), or open it using IntelliJ IDEA.
Router example for Play (https://github.com/aukgit/scala-open-real-time-bidding-rtb/blob/6bf6beb6adb93b83cba49085d2d33269502189e1/app/controllers/controllerRoutes/routerGeneric/RtbServiceBasicRouter.scala):
class RtbServiceBasicRouter @Inject()(
    controller: RequestSimulatorServiceApiController)
  extends SimpleRouter {

  val routingActionWrapper: ControllerGenericActionWrapper = ControllerGenericActionWrapper(
    ControllerDefaultActionType.Routing)

  override def routes: Routes = {
    try {
      case GET(p"/serviceName") | GET(p"/") =>
        controller.getServiceName()
      case GET(p"/commands") | GET(p"/available-commands") | GET(p"/routes") =>
        controller.getAvailableCommands()
      case GET(p"/bannerRequest") =>
        controller.getBannerRequestSample()
    } catch {
      case e: Exception =>
        controller.handleError(e, routingActionWrapper)
        throw e
    }
  }
}
Add the router to the routes file (https://github.com/aukgit/scala-open-real-time-bidding-rtb/blob/6bf6beb6adb93b83cba49085d2d33269502189e1/conf/routes):
-> /services/v1/rtbSimulateService controllers.controllerRoutes.routerGeneric.RtbServiceBasicRouter
SBT Packages:
https://github.com/aukgit/scala-open-real-time-bidding-rtb/blob/6bf6beb6adb93b83cba49085d2d33269502189e1/build.sbt
Recommended packages for SQLite in sbt:
"org.joda" % "joda-convert" % "2.2.1", // for time convert
"com.github.tototoshi" %% "slick-joda-mapper" % "2.4.2", // 2.4 doesn't work
"joda-time" % "joda-time" % "2.7",
"org.xerial" % "sqlite-jdbc" % "3.30.1", // sqlite driver
Slick packages if you want to integrate it:
"com.typesafe.slick" %% "slick" % "3.3.2",
"com.typesafe.slick" %% "slick-codegen" % "3.3.2", // for generating Table schema for sqlite db
Example of generating a table schema from an SQLite db (database-first approach) [https://github.com/aukgit/scala-open-real-time-bidding-rtb/blob/6bf6beb6adb93b83cba49085d2d33269502189e1/app/shared/com/ortb/executors/DatabaseEngineCodeGenerator.scala]:
slick.codegen.SourceCodeGenerator.run(
  profile = databaseGenerateConfig.profile, // "slick.jdbc.SQLiteProfile"
  jdbcDriver = databaseGenerateConfig.jdbcDriver, // "org.sqlite.JDBC"
  url = databaseGenerateConfig.compiledDatabaseUrl,
  // e.g. "jdbc:sqlite:D:\\PersonalWork\\Github\\scala-rtb-example\\src\\main\\resources\\openRTBSample.db"
  outputDir = databaseGenerateConfig.compiledOutputDir,
  // e.g. "D:\\PersonalWork\\Github\\scala-rtb-example\\src\\main\\scala\\com\\ortb\\persistent\\schema"
  pkg = databaseGenerateConfig.pkg,
  user = None,
  password = None,
  ignoreInvalidDefaults = true,
  outputToMultipleFiles = false
)

How to insert into MongoDB from actor using Casbah?

Assumptions:
MongoDB is running at localhost:27017
This project is modeled after https://github.com/sap1ens/akka-microservice. Please refer to it if there is not enough information below to help. If it gets too confusing, I can add code from other files if necessary.
Questions based on RegistrationsService.scala
Why is it trying to connect to MongoDB and insert the document on startup before any PostRegistrationMessage is sent to RegistrationsService actor?
Why is it failing?
How can I convert registrationJsValue into MongoDBObject and insert it into collection?
Relevant info in build.sbt
scalaVersion := "2.10.4"
val akkaVersion = "2.3.8"
val sprayVersion = "1.3.1"
// Main dependencies
libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % akkaVersion,
  "com.typesafe.akka" %% "akka-slf4j" % akkaVersion,
  "io.spray" % "spray-can" % sprayVersion,
  "io.spray" % "spray-routing" % sprayVersion,
  "io.spray" % "spray-client" % sprayVersion,
  "io.spray" %% "spray-json" % sprayVersion,
  "com.typesafe" % "config" % "1.2.1",
  "ch.qos.logback" % "logback-classic" % "1.1.2",
  "org.mongodb" %% "casbah" % "2.7.4"
)
RegistrationService.scala (like ExampleService.scala in git repo)
package service
import akka.actor.{Props, ActorLogging, Actor}
import spray.json._
import com.mongodb.util.JSON
import com.mongodb.casbah.Imports._
import model.Registration
import model.RegistrationProtocol._
object RegistrationsService {
  case class PostRegistrationMessage(registration: Registration)

  def props(property: String) = Props(classOf[RegistrationsService], property)
}

class RegistrationsService(property: String) extends Actor with ActorLogging {
  import RegistrationsService._

  def receive = {
    case PostRegistrationMessage(registration) => {
      val registrationJsValue = registration.toJson
      val dbObject = JSON.parse(registrationJsValue.toString()).asInstanceOf[DBObject]
      val mongoClientURI = MongoClientURI("mongodb://localhost:27017/")
      val mongoClient = MongoClient(mongoClientURI)
      val someDB = mongoClient("somedb")
      val registrationsColl = someDB("registratoins")
      log.info(s"Got access to registratoins collection")
      registrationsColl.insert(MongoDBObject("hello" -> "world"))
      log.info(s"Inserted a doc to registratoins collection")
      mongoClient.close()
      log.info(s"Closed client connection to mongo")
      sender() ! registrationJsValue
    }
  }
}
sbt run
14:01:10.655 [microservice-system-akka.actor.default-dispatcher-3] INFO c.e.a.s.RegistrationsService - Got access to registratoins collection
14:01:11.383 [microservice-system-akka.actor.default-dispatcher-3] DEBUG spray.can.server.HttpListener - Binding to localhost/127.0.0.1:8878
14:01:11.475 [microservice-system-akka.actor.default-dispatcher-3] DEBUG akka.io.TcpListener - Successfully bound to /127.0.0.1:8878
14:01:11.480 [microservice-system-akka.actor.default-dispatcher-6] INFO spray.can.server.HttpListener - Bound to localhost/127.0.0.1:8878
14:01:20.704 [microservice-system-akka.actor.default-dispatcher-5] ERROR akka.actor.OneForOneStrategy - Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=localhost:27017, type=Unknown, state=Connecting, exception={java.lang.NullPointerException}}]
com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=localhost:27017, type=Unknown, state=Connecting, exception={java.lang.NullPointerException}}]
at com.mongodb.BaseCluster.getServer(BaseCluster.java:82) ~[mongo-java-driver-2.12.4.jar:na]
at com.mongodb.DBTCPConnector.getServer(DBTCPConnector.java:654) ~[mongo-java-driver-2.12.4.jar:na]
at com.mongodb.DBTCPConnector.access$300(DBTCPConnector.java:39) ~[mongo-java-driver-2.12.4.jar:na]
at com.mongodb.DBTCPConnector$MyPort.getConnection(DBTCPConnector.java:503) ~[mongo-java-driver-2.12.4.jar:na]
at com.mongodb.DBTCPConnector$MyPort.get(DBTCPConnector.java:451) ~[mongo-java-driver-2.12.4.jar:na]
at com.mongodb.DBTCPConnector.getPrimaryPort(DBTCPConnector.java:409) ~[mongo-java-driver-2.12.4.jar:na]
at com.mongodb.DBCollectionImpl.insert(DBCollectionImpl.java:182) ~[mongo-java-driver-2.12.4.jar:na]
at com.mongodb.DBCollectionImpl.insert(DBCollectionImpl.java:165) ~[mongo-java-driver-2.12.4.jar:na]
at com.mongodb.DBCollection.insert(DBCollection.java:93) ~[mongo-java-driver-2.12.4.jar:na]
at com.mongodb.casbah.MongoCollectionBase$class.insert(MongoCollection.scala:621) ~[casbah-core_2.10-2.7.4.jar:2.7.4]
at com.mongodb.casbah.MongoCollection.insert(MongoCollection.scala:1109) ~[casbah-core_2.10-2.7.4.jar:2.7.4]
at service.RegistrationsService$$anonfun$receive$1.applyOrElse(RegistrationsService.scala:47) ~[classes/:na]
at akka.actor.Actor$class.aroundReceive(Actor.scala:465) ~[akka-actor_2.10-2.3.8.jar:na]
at service.RegistrationsService.aroundReceive(RegistrationsService.scala:20) ~[classes/:na]
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516) [akka-actor_2.10-2.3.8.jar:na]
at akka.actor.ActorCell.invoke(ActorCell.scala:487) [akka-actor_2.10-2.3.8.jar:na]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254) [akka-actor_2.10-2.3.8.jar:na]
at akka.dispatch.Mailbox.run(Mailbox.scala:221) [akka-actor_2.10-2.3.8.jar:na]
at akka.dispatch.Mailbox.exec(Mailbox.scala:231) [akka-actor_2.10-2.3.8.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [scala-library.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [scala-library.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [scala-library.jar:na]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [scala-library.jar:na]
^C14:01:42.231 [microservice-system-akka.actor.default-dispatcher-11] INFO akka.actor.LocalActorRef - Message [akka.actor.Terminated] from Actor[akka://microservice-system/user/$a#1754981697] to Actor[akka://microservice-system/user/IO-HTTP/listener-0#-602059531] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
14:01:42.236 [microservice-system-akka.actor.default-dispatcher-11] DEBUG akka.event.EventStream - shutting down: StandardOutLogger started
Not sure what you mean by the following.
Why is it trying to connect to MongoDB and insert the document on
startup before any PostRegistrationMessage is sent to
RegistrationsService actor?
In the example code you have shown (for RegistrationsService), the only connection that is made is inside the actor's receive method. Unless you (or some other piece of code) are sending this message, the actor will not try to connect to Mongo.
On a side note, why are you creating a Mongo connection inside the actor's receive method? You should avoid any heavyweight operation inside receive. You can create a Mongo connection outside the actor and inject it through the actor's constructor, to be used in the receive method.
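A rough sketch of what that injection could look like (the collection name and startup code are illustrative, modeled on your snippets above):
import akka.actor.{Actor, ActorLogging, Props}
import com.mongodb.casbah.MongoCollection
import com.mongodb.casbah.Imports._
import com.mongodb.util.JSON
import spray.json._
import model.Registration
import model.RegistrationProtocol._

object RegistrationsService {
  case class PostRegistrationMessage(registration: Registration)
  // the collection (and its MongoClient) is created once at startup and passed in
  def props(registrationsColl: MongoCollection) =
    Props(classOf[RegistrationsService], registrationsColl)
}

class RegistrationsService(registrationsColl: MongoCollection) extends Actor with ActorLogging {
  import RegistrationsService._

  def receive = {
    case PostRegistrationMessage(registration) =>
      val registrationJsValue = registration.toJson
      val dbObject = JSON.parse(registrationJsValue.toString()).asInstanceOf[DBObject]
      registrationsColl.insert(dbObject)
      log.info("Inserted registration document")
      sender() ! registrationJsValue
  }
}

// at application startup, e.g. where the actor system is created:
// val mongoClient = MongoClient(MongoClientURI("mongodb://localhost:27017/"))
// val registrationsColl = mongoClient("somedb")("registrations")
// val registrationsService = system.actorOf(RegistrationsService.props(registrationsColl))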
Have you written a standalone program/test that connects to your MongoDB using the Casbah driver? It should work independently of your actor code and will help you isolate any issues (if there are any) with the driver and connection code.
PS: This may not be the answer to your question, but I didn't want to write it in the comments because of space and formatting limitations.