java.lang.NoSuchFieldException: handle when using Embedded MongoDB with Play Framework

I'm trying to enable in-memory MongoDB in my tests, using simplyscala to do the work.
class controllerSpec extends PlaySpec with GuiceOneAppPerTest with Injecting with BeforeAndAfterAll with MongoEmbedDatabase {

  // Holds the reference to the running in-memory MongoDB instance
  var mongoInstance: MongodProps = null

  // Start the in-memory Mongo instance before any test runs
  override def beforeAll(): Unit =
    try {
      val rnd = new scala.util.Random
      val range = 12000 to 36000
      val portNum = range(rnd.nextInt(range.length))
      mongoInstance = mongoStart(portNum) // try starting Mongo on a random port
    } catch {
      case ex: Exception => // ignore, in case a local Mongo is already running
    }

  // Stop the in-memory Mongo instance once all tests have run
  override def afterAll(): Unit = mongoStop(mongoInstance)

  // ... tests ...
}
However, I get this error:
[info] controllerSpec:
java.lang.NoSuchFieldException: handle
at java.base/java.lang.Class.getDeclaredField(Class.java:2411)
at de.flapdoodle.embed.process.runtime.Processes.windowsProcessId(Processes.java:109)
at de.flapdoodle.embed.process.runtime.Processes.access$200(Processes.java:51)
at de.flapdoodle.embed.process.runtime.Processes$PidHelper$2.getPid(Processes.java:209)
at de.flapdoodle.embed.process.runtime.Processes.processId(Processes.java:72)
at de.flapdoodle.embed.process.runtime.ProcessControl.<init>(ProcessControl.java:64)
at de.flapdoodle.embed.process.runtime.ProcessControl.start(ProcessControl.java:205)
at de.flapdoodle.embed.process.runtime.AbstractProcess.<init>(AbstractProcess.java:98)
at de.flapdoodle.embed.mongo.AbstractMongoProcess.<init>(AbstractMongoProcess.java:53)
at de.flapdoodle.embed.mongo.MongodProcess.<init>(MongodProcess.java:50)
at de.flapdoodle.embed.mongo.MongodExecutable.start(MongodExecutable.java:44)
at de.flapdoodle.embed.mongo.MongodExecutable.start(MongodExecutable.java:34)
at de.flapdoodle.embed.process.runtime.Executable.start(Executable.java:101)
at com.github.simplyscala.MongoEmbedDatabase.mongoStart(MongoEmbedDatabase.scala:26)
at com.github.simplyscala.MongoEmbedDatabase.mongoStart$(MongoEmbedDatabase.scala:22)
at controllersSpec.ssmServiceSpec.ssmControllerSpec.mongoStart(ssmControllerSpec.scala:22)
at controllersSpec.ssmServiceSpec.ssmControllerSpec.beforeAll(ssmControllerSpec.scala:32)
at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
at controllersSpec.ssmServiceSpec.ssmControllerSpec.run(ssmControllerSpec.scala:22)
at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:304)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Does it have something to do with BeforeAndAfterAll? Please tell me if I did anything wrong. Thank you.

Upgrade the version of the Embedded MongoDB test dependency (de.flapdoodle.embed.mongo) to 3.4.3.
Gradle:
testImplementation 'de.flapdoodle.embed:de.flapdoodle.embed.mongo:3.4.3'
Maven (pom.xml):
<dependency>
    <groupId>de.flapdoodle.embed</groupId>
    <artifactId>de.flapdoodle.embed.mongo</artifactId>
    <version>3.4.3</version> <!-- consider making this version a property -->
    <scope>test</scope>
</dependency>
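For completeness, since the stack trace above shows the tests running under sbt (sbt.ForkMain), the rough sbt equivalent would be the line below. This is a minimal sketch with the same artifact coordinates; whether simplyscala's MongoEmbedDatabase wrapper works unchanged against the newer flapdoodle API is not guaranteed.
libraryDependencies += "de.flapdoodle.embed" % "de.flapdoodle.embed.mongo" % "3.4.3" % Test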

Related

Test Spark Scala with Maven Got Error: java.lang.NoClassDefFoundError

I tried to test Spark with Scala in Scala IDE (Eclipse) using Maven, but I keep getting this error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
at org.apache.spark.SparkConf.loadFromSystemProperties(SparkConf.scala:73)
at org.apache.spark.SparkConf.<init>(SparkConf.scala:68)
at org.apache.spark.SparkConf.<init>(SparkConf.scala:55)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:904)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at com.SimpleApp$.main(SimpleApp.scala:7)
at com.SimpleApp.main(SimpleApp.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 9 more
The program I try is the Quick Start code, from the Spark Documentation:
import org.apache.spark.sql.SparkSession

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // Should be some file on your system
    val spark = SparkSession.builder.appName("Simple Application").getOrCreate()
    val logData = spark.read.textFile(logFile).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println(s"Lines with a: $numAs, Lines with b: $numBs")
    spark.stop()
  }
}
I use Spark 2.2.0 and Scala 2.11.7. The pom.xml file is:
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.2.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.2.0</version>
</dependency>
I followed a solution from another thread: NoClassDefFoundError com.apache.hadoop.fs.FSDataInputStream when execute spark-shell
But it doesn't work for me. The content in my spark-env.sh file is:
# If 'hadoop' binary is on your PATH
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
# With explicit path to 'hadoop' binary
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
# Passing a Hadoop configuration directory
export SPARK_DIST_CLASSPATH=$(hadoop --config /usr/local/hadoop/etc/hadoop classpath)
Could anybody help me with this? I appreciate your help.
Devesh's answer solved part of my problem. However, I got other problems:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
18/08/17 10:34:03 INFO SparkContext: Running Spark version 2.2.0
18/08/17 10:34:03 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/08/17 10:34:03 WARN Utils: Your hostname, toshiba0 resolves to a loopback address: 127.0.1.1; using 192.168.1.217 instead (on interface wlp2s0)
18/08/17 10:34:03 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
18/08/17 10:34:03 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: A master URL must be set in your configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:376)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at com.SimpleApp$.main(SimpleApp.scala:11)
at com.SimpleApp.main(SimpleApp.scala)
18/08/17 10:34:03 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: A master URL must be set in your configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:376)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at com.SimpleApp$.main(SimpleApp.scala:11)
at com.SimpleApp.main(SimpleApp.scala)
I don't know why Spark says my loopback address is 127.0.1.1. I checked my configuration in /etc/network/interfaces: it is set to auto loopback, and pinging 127.0.0.1 works.
I followed the solution from this link: Error initializing SparkContext: A master URL must be set in your configuration,
and put in the following code, because I run the application on my laptop. It still doesn't work.
val conf = new SparkConf().setMaster("local[2]")
I don't know what's wrong with my settings. Thank you!
Just add the following to the Maven pom.xml file:
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.0</version>
</dependency>
In previous versions of Spark, you had to create a SparkConf and SparkContext to interact with Spark, whereas from Spark 2.0 onwards the same effect can be achieved through SparkSession, without explicitly creating SparkConf, SparkContext or SQLContext, as they are all encapsulated within the SparkSession.
Sample code snippet:
import org.apache.spark.sql.SparkSession

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // some file on the system
    val spark = SparkSession
      .builder
      .appName("Simple Application")
      .master("local[2]")
      .getOrCreate()
    val logData = spark.read.textFile(logFile).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println(s"Lines with a: $numAs, Lines with b: $numBs")
    spark.stop() // stop the session when done
  }
}
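A side note: hard-coding .master("local[2]") is convenient for IDE runs, but on a real cluster the master URL is usually supplied from outside, for example via spark-submit. A small sketch of one way to keep it configurable, assuming the master is passed as the first program argument:
// Hypothetical variation: take the master URL from args, defaulting to local[2] for IDE runs.
val master = args.headOption.getOrElse("local[2]")
val spark = SparkSession
  .builder
  .appName("Simple Application")
  .master(master)
  .getOrCreate()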

MongoDB Scala Driver doesn't allow use Map collection in case class

I just started using MongoDB and I'm trying to write a small application to test Mongo with Scala. I created the following case class in order to cast the Documents to a Scala class:
import org.bson.codecs.configuration.CodecRegistries.{fromProviders, fromRegistries}
import org.bson.codecs.configuration.CodecRegistry
import org.mongodb.scala.MongoClient.DEFAULT_CODEC_REGISTRY
import org.mongodb.scala.bson.ObjectId
import org.mongodb.scala.bson.codecs.Macros._

case class User(
  _id: ObjectId,
  userId: String,
  items: Map[String, Int]
)

object User {
  def apply(userId: String, items: Map[String, Int]): User =
    new User(new ObjectId, userId, items)

  implicit val codecRegistry: CodecRegistry =
    fromRegistries(fromProviders(classOf[User]), DEFAULT_CODEC_REGISTRY)
}
I get the following error but I don't know why since the Map keys are in fact strings.
[ERROR] error: Maps must contain string types for keys
[INFO] implicit val codecRegistry: CodecRegistry = fromRegistries (fromProviders (classOf [User]), DEFAULT_CODEC_REGISTRY)
[INFO] ^
[ERROR] one error found
I'm also applying the codecRegistry to the MongoDatabase.
Thank you very much.
The problem was that I was using a version of the driver compiled for Scala 2.11 instead of 2.12. Changing the Maven dependency from
<dependency>
    <groupId>org.mongodb.scala</groupId>
    <artifactId>mongo-scala-driver_2.11</artifactId>
    <version>2.2.1</version>
</dependency>
to
<dependency>
    <groupId>org.mongodb.scala</groupId>
    <artifactId>mongo-scala-driver_2.12</artifactId>
    <version>2.2.1</version>
</dependency>
solved the problem.
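As an aside, in sbt this kind of mismatch is usually avoided with the %% operator, which selects the artifact matching the project's Scala binary version automatically; a minimal sketch:
libraryDependencies += "org.mongodb.scala" %% "mongo-scala-driver" % "2.2.1"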

Play WS Standalone: Instantiating StandaloneAhcWSClient causes some weird issues

What I'm trying to accomplish is pretty much straightforward:
Instantiate a standalone WS client instance as described here
Call the remote service
However, if I run the test class I get this error:
java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V at akka.util.Timeout.
You'll find the full stacktrace further below.
Any idea what could cause the issue and how to make the example work?
package restclient

import org.junit.runner.RunWith
import org.scalatest._
import org.scalatest.junit.JUnitRunner

@RunWith(classOf[JUnitRunner])
class RestClientTest extends WordSpecLike with ShouldMatchers {

  trait Fixture {
    val restClient = new RestClientImpl
  }

  "The Rest client" should {
    "" in new Fixture {
      val res = restClient.call("http://localhost:3000/post")
      res shouldBe "c"
    }
  }
}
=============================================
package restclient

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import play.api.libs.ws.ahc._

// still WIP
trait RestClient {
  def call(url: String): String
}

class RestClientImpl extends RestClient {
  override def call(url: String): String = {
    // Create Akka system for thread and streaming management
    implicit val system = ActorSystem()
    system.registerOnTermination {
      System.exit(0)
    }
    implicit val m = ActorMaterializer()
    val wsClient = StandaloneAhcWSClient()
    "x"
  }
}
=============================================
java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at akka.util.Timeout.<init>(Timeout.scala:13)
at akka.actor.ActorSystem$Settings.<init>(ActorSystem.scala:327)
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:650)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:244)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:287)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:232)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:223)
at restclient.RestClientImpl.call(RestClientImpl.scala:28)
at restclient.RestClientTest$$anonfun$1$$anonfun$apply$mcV$sp$1$$anon$1.<init>(RestClientTest.scala:16)
at restclient.RestClientTest$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(RestClientTest.scala:15)
at restclient.RestClientTest$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(RestClientTest.scala:15)
at restclient.RestClientTest$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(RestClientTest.scala:15)
at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.WordSpecLike$$anon$1.apply(WordSpecLike.scala:953)
at org.scalatest.Suite$class.withFixture(Suite.scala:1122)
at restclient.RestClientTest.withFixture(RestClientTest.scala:8)
at ...
=============================================
<dependency>
    <groupId>com.typesafe.play</groupId>
    <artifactId>play-ahc-ws-standalone_2.12</artifactId>
    <version>1.0.7</version>
</dependency>
<dependency>
    <groupId>com.typesafe.play</groupId>
    <artifactId>play-ws-standalone-json_2.12</artifactId>
    <version>1.0.7</version>
</dependency>

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.commons.beanutils.PropertyUtilsBean

I'm trying to read a properties file and got stuck with the error given below. I have written a Scala package where I try to read the properties file and use it in the abc.scala program. Any help will be appreciated.
File:- xyz.properties
driver = "oracle.jdbc.driver.OracleDriver"
url = "jdbc:oracle:thin:#xxxx:1521/xxxx.xxxx"
username = "xxx"
password = "xxx"
input_file = "C:\\Users\\xxx\\test\\src\\main\\resources\\xxxx.xlsx"
build.sbt
name := "xxx.xxxx.xxxxx"
scalaVersion := "2.10.6"
ivyScala := ivyScala.value map{ _.copy(overrideScalaVersion = true) }
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.1.0",
  "com.databricks" %% "spark-csv" % "1.5.0",
  "org.apache.commons" % "commons-configuration2" % "2.1.1",
  "commons-beanutils" % "commons-beanutils" % "1.9.3",
  "org.apache.spark" %% "spark-sql" % "2.1.0",
  "org.scala-lang" % "scala-xml" % "2.11.0-M4"
)
Package
package com.xxx.zzzz.xxx1

import java.io.File
import org.apache.commons.configuration2.builder.fluent.{Configurations, Parameters}

object Configuration {
  var config = new Configurations()
  var configs = config.properties(new File("xyz.properties"))
  var inputFile = configs.getString("input")
  var userName = configs.getString("user_name")
  var password = configs.getString("passwd")
  var driver = configs.getString("driver")
  var url = configs.getString("Url")
}
Main Program abc.scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import com.xxx.zzzz.xxx1.Configuration
import org.apache.commons.beanutils.PropertyUtils

object ItalyPanelData {
  def main(args: Array[String]): Unit = {
    //Logger.getRootLogger().setLevel(Level.OFF)
    println("Inside main program" + Configuration.driver)

    // Set the properties for Spark to connect to the Oracle database
    val dbProp = new java.util.Properties
    dbProp.setProperty("driver", Configuration.driver)
    dbProp.setProperty("user", Configuration.userName)
    dbProp.setProperty("password", Configuration.password)

    // Create a connection to connect Spark
    val conf = new SparkConf().setAppName("Simple Application").setMaster("local")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // exception handling
    try {
      // Create dataframe object
      val df = sqlContext.read
        .option("location", Configuration.inputFile)       // input path
        .option("sheetName", "xyz")                        // sheet name
        .option("useHeader", "true")                       // take header names from the Excel sheet
        .option("treatEmptyValuesAsNulls", "true")
        .option("inferSchema", "true")
        .option("addColorColumns", "false")
        .load()

      // Write into the Oracle database
      df.write.mode("append").jdbc(Configuration.url, "xyz", dbProp)
    } catch {
      case e: Throwable => e.printStackTrace()
    }
  }
}
Error
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.commons.beanutils.PropertyUtilsBean.addBeanIntrospector(Lorg/apache/commons/beanutils/BeanIntrospector;)V
at org.apache.commons.configuration2.beanutils.BeanHelper.initBeanUtilsBean(BeanHelper.java:631)
at org.apache.commons.configuration2.beanutils.BeanHelper.<clinit>(BeanHelper.java:89)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at com.sun.proxy.$Proxy0.<clinit>(Unknown Source)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:739)
at org.apache.commons.configuration2.builder.fluent.Parameters.createParametersProxy(Parameters.java:294)
at org.apache.commons.configuration2.builder.fluent.Parameters.fileBased(Parameters.java:185)
at org.apache.commons.configuration2.builder.fluent.Configurations.fileParams(Configurations.java:602)
at org.apache.commons.configuration2.builder.fluent.Configurations.fileParams(Configurations.java:614)
at org.apache.commons.configuration2.builder.fluent.Configurations.fileBasedBuilder(Configurations.java:132)
at org.apache.commons.configuration2.builder.fluent.Configurations.propertiesBuilder(Configurations.java:238)
at org.apache.commons.configuration2.builder.fluent.Configurations.properties(Configurations.java:282)
at com.rxcorp.italy.config.Configuration$.<init>(Configuration.scala:8)
at com.rxcorp.italy.config.Configuration$.<clinit>(Configuration.scala)
at com.rxcorp.paneldataloading.ItalyPanelData$.main(abc.scala:12)
Such exceptions are an indication of a version incompatibility.
Meaning: the code that you have written (or, more likely, one of the libraries under the surface) wants to call the method
org.apache.commons.beanutils.PropertyUtilsBean.addBeanIntrospector(BeanIntrospector);
but at runtime, the class file for PropertyUtilsBean does not contain that method.
Thus you have to step back, understand the components in your stack, and check their version requirements on the Apache Commons BeanUtils library.
You get more ideas when looking into the javadoc for that method; it says "Since: 1.9".
In other words: this method was added in Commons BeanUtils 1.9, so some piece of your stack expects at least that version, but the classpath of the JVM that executes the whole thing has an older version.
So: check the classpath for commons-beanutils; most likely you are fine after simply updating to a newer version (and yes, that may mean some "hard" debugging work, since your build settings already declare a newer version).
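If it is unclear which jar the PropertyUtilsBean class is actually loaded from at runtime, asking the class for its code source is a quick check. A minimal sketch (the object name is made up for illustration):
import org.apache.commons.beanutils.PropertyUtilsBean

// Hypothetical helper: print which jar the class was loaded from.
object WhereIsBeanUtils extends App {
  val clazz = classOf[PropertyUtilsBean]
  // getCodeSource may be null for classes loaded by the bootstrap classloader
  val location = Option(clazz.getProtectionDomain.getCodeSource).map(_.getLocation)
  println(s"PropertyUtilsBean loaded from: ${location.getOrElse("unknown (bootstrap classloader)")}")
}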
I guess I have a similar problem. Apache Commons Configuration 2.7 is used in our project together with Apache Commons BeanUtils 1.9.
Unfortunately, another library we use is jxls-reader 2.0.+, and that one references the commons-digester3 library.
So the beanutils 1.9 jar as well as the commons-digester3 jar both package a class org.apache.commons.beanutils.PropertyUtilsBean, but commons-digester3's version does not have the above-mentioned method, bringing us to the same dilemma you have.
For now we are lucky that our Windows servers load the "correct" version of beanutils first, whereas some developers using a Mac have it the other way round: the digester3 package is loaded first, bringing up the no-such-method error you have.
I'm not sure what our workaround can be here.
Anyway, check if you have the class twice on your classpath and find out who is using it by checking the pom.xml of every dependent library on the classpath (a small sketch for detecting duplicates follows after the exclusion below). Finally you might be lucky enough to remove some library if it is not needed by your code (though chances are low).
Update 10th Nov: I excluded commons-digester3 from the jxls-reader dependency:
<dependency>
    <groupId>org.jxls</groupId>
    <artifactId>jxls-reader</artifactId>
    <version>2.0.3</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-digester3</artifactId>
        </exclusion>
    </exclusions>
</dependency>
That way the commons-digester artifact with classifier "with-deps" from jxls-reader won't get resolved, and I pull it in explicitly in our pom.xml, but only the normal jar without the repackaged classes of commons-logging, commons-beanutils and so on.
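To verify whether PropertyUtilsBean really ends up on the classpath more than once after such an exclusion, something along these lines can help (a minimal sketch with a made-up object name; mvn dependency:tree is another way to see which artifacts pull the class in):
import scala.collection.JavaConverters._

// Hypothetical helper: list every classpath location that provides PropertyUtilsBean.
object FindDuplicateBeanUtils extends App {
  val resource = "org/apache/commons/beanutils/PropertyUtilsBean.class"
  val urls = getClass.getClassLoader.getResources(resource).asScala.toList
  urls.foreach(url => println(s"found: $url"))
  println(s"${urls.size} copies of PropertyUtilsBean on the classpath")
}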

Postgres JDBC in scala unable to run two queries without restarting sbt

I am connecting to a Postgres database with Scala (2.9.2).
The first time I run a SELECT (by typing 'run' in the sbt shell in IntelliJ) it works well, but if I 'run' again in the sbt shell I get an error that claims:
[error] (run-main) java.sql.SQLException: No suitable driver found for jdbc:postgresql://localhost/db
java.sql.SQLException: No suitable driver found for jdbc:postgresql://localhost/db
at java.sql.DriverManager.getConnection(DriverManager.java:602)
at java.sql.DriverManager.getConnection(DriverManager.java:185)
at SequenceGenerator$.connect(Validator.scala:50)
at SequenceGenerator$.generate(Validator.scala:54)
at Main$.main(Validator.scala:32)
at Main.main(Validator.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
I have added the Postgres connector through sbt, with this row in my build.sbt file:
libraryDependencies += "postgresql" % "postgresql" % "9.1-901.jdbc4"
This is my code for making a SELECT:
import java.sql.DriverManager

object SequenceGenerator {

  def connect() = {
    DriverManager.getConnection("jdbc:postgresql://localhost/db", "user", "pass")
  }

  def generate() = {
    val db = connect()
    val st = db.createStatement
    val res = st.executeQuery("SELECT value from table LIMIT 2")
    while (res.next) {
      println(res.getString("value"))
    }
  }
}
We had to add this to our integration tests startup to get around the same issue with the mysql jdbc driver:
Class.forName("com.mysql.jdbc.Driver").newInstance
I'm not sure why SBT unloads the driver, but something like this would probably work for you:
object SequenceGenerator {
  // Use whatever your jdbc driver class should be here, I'm just guessing
  Class.forName("org.postgresql.Driver").newInstance

  def connect() = {
    DriverManager.getConnection("jdbc:postgresql://localhost/db", "user", "pass")
  }

  def generate() = {
    val db = connect()
    val st = db.createStatement
    val res = st.executeQuery("SELECT value from table LIMIT 2")
    while (res.next) {
      println(res.getString("value"))
    }
  }
}
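Another workaround sometimes suggested for driver/classloader trouble when re-running inside the same sbt session is to fork each run into a fresh JVM; a sketch using the old sbt setting syntax that matches this Scala 2.9-era project:
fork in run := true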
The main problem is that you are trying to load the driver using the old DriverManager approach. You have to use the Database.forDataSource approach in Slick and a proper database pool like C3P0.
I think this is a classloading issue between Slick, SBT and the driver.
Obviously, make sure that the driver is on the classpath.
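To illustrate the pooled-DataSource idea (this is not the asker's code), here is a minimal sketch using c3p0 directly; it assumes the com.mchange:c3p0 dependency is available and leaves the Slick Database.forDataSource wiring out:
import com.mchange.v2.c3p0.ComboPooledDataSource

// Hypothetical sketch: obtain connections from a c3p0 pool instead of DriverManager.
object PooledSequenceGenerator {
  private val ds = new ComboPooledDataSource()
  ds.setDriverClass("org.postgresql.Driver") // explicit driver class, no DriverManager discovery involved
  ds.setJdbcUrl("jdbc:postgresql://localhost/db")
  ds.setUser("user")
  ds.setPassword("pass")

  def generate(): Unit = {
    val conn = ds.getConnection
    try {
      val st = conn.createStatement()
      val res = st.executeQuery("SELECT value from table LIMIT 2")
      while (res.next()) {
        println(res.getString("value"))
      }
    } finally {
      conn.close() // returns the connection to the pool
    }
  }
}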