I'm trying to test some SQL statements developed with doobie, using the Testcontainers libraries. Specifically, I'm using the following dependencies:
"org.testcontainers" % "testcontainers" % "1.17.3" % Test,
"org.testcontainers" % "postgresql" % "1.17.3" % Test
My domain model is straightforward:
final case class Company(id: Int, name: String, description: String)
I create the table on PostgreSQL using an init script:
CREATE TABLE companies (
id serial NOT NULL,
name text NOT NULL,
description text
);
ALTER TABLE companies
ADD CONSTRAINT pk_companies PRIMARY KEY (id);
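(Equivalently, since serial already supplies the auto-incrementing default, the key can be declared inline in a single statement:)

CREATE TABLE companies (
    id          serial PRIMARY KEY,
    name        text NOT NULL,
    description text
);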
I'm trying to develop an integration test that verifies the insertion of a new Company. I'm using scalatest:
"org.scalatest" %% "scalatest" % scalaTestVersion % Test,
"org.typelevel" %% "cats-effect-testing-scalatest" % scalaTestCatsEffectVersion % Test
The code of the test is the following:
class CompaniesSpec extends AsyncFreeSpec with AsyncIOSpec with BeforeAndAfter with Matchers {

  private val transactor: Resource[IO, Transactor[IO]] = for {
    postgres <- makePostgres
    ce       <- ExecutionContexts.fixedThreadPool[IO](1)
    xa <- HikariTransactor.newHikariTransactor[IO](
      "org.postgresql.Driver",
      postgres.getJdbcUrl,
      postgres.getUsername,
      postgres.getPassword,
      ce
    )
  } yield xa

  private def makePostgres =
    Resource.make(IO {
      val container: PostgreSQLContainer[Nothing] =
        new PostgreSQLContainer().withInitScript("sql/companies.sql")
      container.start()
      container
    })(c => IO(c.stop()))

  "Companies algebra " - {
    "should create a new Company" in {
      (for {
        companyId <- transactor.use { xa =>
          sql"INSERT INTO companies (name, description) VALUES ('My Company', 'My Company Description')"
            .update
            .withUniqueGeneratedKeys[Int]("id")
            .transact(xa)
        }
        maybeCompany <- transactor.use { xa =>
          sql"SELECT * FROM companies WHERE id = $companyId"
            .query[Company]
            .option
            .transact(xa)
        }
      } yield maybeCompany)
        .asserting { company =>
          company shouldBe defined
          company.get.name shouldBe "My Company"
          company.get.description shouldBe "My Company Description"
        }
    }
  }
}
However, the test's assertions fail with the following error, which indicates that the code is not inserting anything into the database.
None was not defined
ScalaTestFailureLocation: com.rockthejvm.board.playground.algebras.CompaniesSpec at (CompaniesSpec.scala:68)
I really can't understand why the test is failing.
Can anyone help me?
makePostgres is a Resource which you compose with the HikariTransactor into the transactor Resource. That means that every time you call transactor.use { ... }, a new container and a new Hikari connection pool are started; when the code inside use finishes, both are shut down again.
Because you execute the two queries in different transactor.use { ... } blocks, they run against different containers and different connection pools.
Instead, make sure you use the same resources throughout your test:
transactor.use { xa =>
  for {
    companyId <-
      sql"INSERT INTO companies (name, description) VALUES ('My Company', 'My Company Description')"
        .update
        .withUniqueGeneratedKeys[Int]("id")
        .transact(xa)
    maybeCompany <-
      sql"SELECT * FROM companies WHERE id = $companyId"
        .query[Company]
        .option
        .transact(xa)
  } yield maybeCompany
}
.asserting { company =>
  company shouldBe defined
  company.get.name shouldBe "My Company"
  company.get.description shouldBe "My Company Description"
}
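If you also want the container to survive across several test cases, not just across the two queries, you can allocate the resource once per suite and only release it at the end. A minimal sketch, assuming cats-effect 3 (Resource.allocated and the global IORuntime); transactor here stands for the same Resource[IO, Transactor[IO]] defined in the question:

import cats.effect.IO
import cats.effect.unsafe.implicits.global // IORuntime for unsafeRunSync (cats-effect 3)
import doobie.util.transactor.Transactor
import org.scalatest.BeforeAndAfterAll

class CompaniesSpec extends AsyncFreeSpec with AsyncIOSpec with Matchers with BeforeAndAfterAll {

  private var xa: Transactor[IO] = _       // shared by every test in the suite
  private var release: IO[Unit] = IO.unit  // finalizer returned by `allocated`

  override def beforeAll(): Unit = {
    // `transactor` is the container + pool resource from the question.
    val (t, fin) = transactor.allocated.unsafeRunSync()
    xa = t
    release = fin
  }

  override def afterAll(): Unit = release.unsafeRunSync()

  // ... tests then use `xa` directly, e.g. query.transact(xa) ...
}

Each test can then call .transact(xa) on the shared transactor, and the container is started and stopped exactly once per suite.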
Related
package com.knoldus

import org.apache.flink.api.java.utils.ParameterTool
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

object SocketWindowWordCount {

  def main(args: Array[String]): Unit = {

    var hostname: String = "localhost"
    var port: Int = 9000

    try {
      val params = ParameterTool.fromArgs(args)
      hostname = if (params.has("hostname")) params.get("hostname") else "localhost"
      port = params.getInt("port")
    } catch {
      case e: Exception =>
        System.err.println("No port specified. Please run 'SocketWindowWordCount " +
          "--hostname <hostname> --port <port>', where hostname (localhost by default) and port " +
          "is the address of the text server")
        System.err.println("To start a simple text server, run 'netcat -l <port>' " +
          "and type the input text into the command line")
        return
    }

    // get the execution environment
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment

    // get input data by connecting to the socket
    val text: DataStream[String] = env.socketTextStream(hostname, port, '\n')

    // parse the data, group it, window it, and aggregate the counts
    val windowCounts = text
      .flatMap { w => w.split("\\s") }
      .map { w => WordWithCount(w, 1) }
      .keyBy("word")
      .timeWindow(Time.seconds(5))
      .sum("count")

    // print the results with a single thread, rather than in parallel
    windowCounts.print().setParallelism(1)

    env.execute("Socket Window WordCount")
  }

  /** Data type for words with count */
  case class WordWithCount(word: String, count: Long)
}
When I run this code in IntelliJ, I get the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flink/streaming/api/scala/StreamExecutionEnvironment$
at com.knoldus.SocketWindowWordCount$.main(SocketWindowWordCount.scala:43)
at com.knoldus.SocketWindowWordCount.main(SocketWindowWordCount.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.flink.streaming.api.scala.StreamExecutionEnvironment$
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 2 more
Select menu item "Run" => "Edit Configurations...",
then in the "Build and run" section select "Modify options" => Java => Add dependencies with "Provided" scope to classpath in your local configuration.
In this way you don't have to remove the <scope>provided</scope>.
I resolved this by removing the <scope>provided</scope> that was present in the Maven import.
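If you'd rather keep <scope>provided</scope> (so cluster deployments stay slim) while still running inside the IDE, the Flink quickstart POMs use a Maven profile that promotes the provided dependencies to compile scope only in IntelliJ. A sketch along those lines; the profile id and the idea.version activation follow the quickstart convention, and the artifact/Scala suffix shown here is an example you should adjust to your own build:

<profiles>
  <profile>
    <!-- IntelliJ sets idea.version, so this activates automatically in the
         IDE and makes the provided Flink dependencies visible at runtime. -->
    <id>add-dependencies-for-IDEA</id>
    <activation>
      <property>
        <name>idea.version</name>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-scala_2.12</artifactId>
        <version>${flink.version}</version>
        <scope>compile</scope>
      </dependency>
    </dependencies>
  </profile>
</profiles>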
I am running a streaming job on a Flink 1.9.1 cluster and trying to get a histogram of values into our Prometheus metric collector. Per the recommendation in the Flink docs, I used the Dropwizard histogram implementation with the Flink-provided wrapper; however, when I submit the job to the cluster, it crashes with the following traceback:
java.lang.LinkageError: loader constraint violation: when resolving method "org.apache.flink.dropwizard.metrics.DropwizardHistogramWrapper.<init>(Lcom/codahale/metrics/Histogram;)V" the class loader (instance of org/apache/flink/util/ChildFirstClassLoader) of the current class, com/example/foo/metrics/FooMeter, and the class loader (instance of sun/misc/Launcher$AppClassLoader) for the method's defining class, org/apache/flink/dropwizard/metrics/DropwizardHistogramWrapper, have different Class objects for the type com/codahale/metrics/Histogram used in the signature
at com.example.foo.metrics.FooMeter.<init>(FooMeter.scala:11)
at com.example.foo.transform.ValidFoos$.open(ValidFoos.scala:15)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.api.operators.StreamFlatMap.open(StreamFlatMap.java:43)
at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:532)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:396)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
at java.lang.Thread.run(Thread.java:748)
I found a similar error on the mailing list; however, using the shadowJar plugin in Gradle didn't help.
Is there something I am missing?
Relevant code here:
import com.codahale.metrics.{Histogram, SlidingWindowReservoir}
import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.dropwizard.metrics.DropwizardHistogramWrapper
import org.apache.flink.metrics.{MetricGroup, Histogram => FlinkHistogram}
import org.apache.flink.util.Collector

import scala.util.Try

class FooMeter(metricGroup: MetricGroup, name: String) {
  private var histogram: FlinkHistogram = metricGroup.histogram(
    name, new DropwizardHistogramWrapper(new Histogram(new SlidingWindowReservoir(500))))

  def record(fooValue: Long): Unit = {
    histogram.update(fooValue)
  }
}

object ValidFoos extends RichFlatMapFunction[Try[FooData], Foo] {
  @transient private var fooMeter: FooMeter = _

  override def open(parameters: Configuration): Unit = {
    fooMeter = new FooMeter(getRuntimeContext.getMetricGroup, "foo_values")
  }

  override def flatMap(value: Try[FooData], out: Collector[Foo]): Unit = {
    Transform.validFoo(value) foreach { foo =>
      fooMeter.record(foo.value)
      out.collect(foo)
    }
  }
}
build.gradle:
plugins {
    id 'scala'
    id 'application'
    id 'com.github.johnrengelman.shadow' version '2.0.4'
}

ext {
    flinkVersion = "1.9.1"
    scalaBinaryVersion = "2.11"
    scalaVersion = "2.11.12"
}

dependencies {
    implementation(
        "org.apache.flink:flink-streaming-scala_${scalaBinaryVersion}:${flinkVersion}",
        "org.apache.flink:flink-connector-kafka_${scalaBinaryVersion}:${flinkVersion}",
        "org.apache.flink:flink-runtime-web_${scalaBinaryVersion}:${flinkVersion}",
        "org.apache.flink:flink-json:${flinkVersion}",
        "org.apache.flink:flink-metrics-dropwizard:${flinkVersion}",
        "org.scala-lang:scala-library:${scalaVersion}"
    )
}
shadowJar {
    relocate("org.apache.flink.dropwizard", "com.example.foo.shaded.dropwizard")
    relocate("com.codahale", "com.example.foo.shaded.codahale")
}

jar {
    zip64 = true
    archiveName = rootProject.name + '-all.jar'
    manifest {
        attributes('Main-Class': 'com.example.foo.Foo')
    }
    from {
        configurations.compileClasspath.collect {
            it.isDirectory() ? it : zipTree(it)
        }
        configurations.runtimeClasspath.collect {
            it.isDirectory() ? it : zipTree(it)
        }
    }
}
Further Info:
Running the code locally works
The Flink cluster is custom-compiled, with the following directory structure:
# find /usr/lib/flink/
/usr/lib/flink/
/usr/lib/flink/plugins
/usr/lib/flink/plugins/flink-metrics-influxdb-1.9.1.jar
/usr/lib/flink/plugins/flink-s3-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-graphite-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-prometheus-1.9.1.jar
/usr/lib/flink/plugins/flink-cep_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-python_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-queryable-state-runtime_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-sql-client_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-slf4j-1.9.1.jar
/usr/lib/flink/plugins/flink-state-processor-api_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-oss-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-statsd-1.9.1.jar
/usr/lib/flink/plugins/flink-swift-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-gelly-scala_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-azure-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-datadog-1.9.1.jar
/usr/lib/flink/plugins/flink-shaded-netty-tcnative-dynamic-2.0.25.Final-7.0.jar
/usr/lib/flink/plugins/flink-s3-fs-presto-1.9.1.jar
/usr/lib/flink/plugins/flink-cep-scala_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-gelly_2.11-1.9.1.jar
/usr/lib/flink/lib
/usr/lib/flink/lib/flink-metrics-influxdb-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-graphite-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-prometheus-1.9.1.jar
/usr/lib/flink/lib/flink-table_2.11-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-slf4j-1.9.1.jar
/usr/lib/flink/lib/log4j-1.2.17.jar
/usr/lib/flink/lib/slf4j-log4j12-1.7.15.jar
/usr/lib/flink/lib/flink-metrics-statsd-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-datadog-1.9.1.jar
/usr/lib/flink/lib/flink-table-blink_2.11-1.9.1.jar
/usr/lib/flink/lib/flink-dist_2.11-1.9.1.jar
/usr/lib/flink/bin/...
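One detail worth double-checking with this layout: the relocations only exist in the artifact produced by shadowJar, while the plain jar task above also builds a fat -all.jar without them, so it is easy to submit the wrong archive. A sketch of building and submitting the relocated jar, assuming the shadow plugin's default output location and the standard Flink CLI:

# Build the relocated fat jar (shadow plugin default:
# build/libs/<project>-<version>-all.jar).
./gradlew clean shadowJar

# Submit that jar, not the one from the plain `jar` task, which bundles the
# classpath without applying the relocations.
flink run build/libs/<project>-<version>-all.jar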
I’m working my way through the Lift Application Development Cookbook by Gilberto T. Garcia Jr and have run up against a problem I can’t seem to resolve. I’ve copied the source code from Chap06-map-table and I’m trying to modify it to work with my IBM i (iSeries, AS/400, i5) database. I was able to make it work with the first type of connection, using Squeryl Record, but I can’t figure out how to get it to work using a JNDI DataSource. I’ve spent a couple of days searching the internet for examples of setting this up and have not found a good example involving a DB/400 database connection. Below is the error I get when I attempt to start the container, along with the code I’ve modified in an effort to make it work. Any help would be appreciated.
There seem to be several choices for the data source class in jt400.jar (JTOpen), and I’m not sure which would be best to use, or whether there’s another option entirely. I’ve been trying each of the three below, assuming the first is the correct one.
com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource
com.ibm.as400.access.AS400JDBCConnectionPoolDataSource
com.ibm.as400.access.AS400JDBCDataSource
Thanks. Bob
This is the start of the error:
> container:start
[info] jetty-8.0.4.v20111024
[info] No Transaction manager found - if your webapp requires one, please configure one.
[info] NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet
[info] started o.e.j.w.WebAppContext{/,[file:/C:/Users/Bob/Lift26Projects/scala_210/chap06-map-table/src/main/webapp/]}
[info] started o.e.j.w.WebAppContext{/,[file:/C:/Users/Bob/Lift26Projects/scala_210/chap06-map-table/src/main/webapp/]}
18:21:47.062 [pool-7-thread-1] ERROR n.liftweb.http.provider.HTTPProvider - Failed to Boot! Your application may not run properly
java.sql.SQLException: The application requester cannot establish the connection. ("jdbc:as400://www.busapp.com;libraries=PLAY2TEST";naming=system;errors=full;)
    at com.ibm.as400.access.JDError.throwSQLException(JDError.java:524) ~[jt400-6.7.jar:JTOpen 6.7]
    at com.ibm.as400.access.AS400JDBCConnection.setProperties(AS400JDBCConnection.java:3142) ~[jt400-6.7.jar:JTOpen 6.7]
    at com.ibm.as400.access.AS400JDBCManagedDataSource.createPhysicalConnect...
My build.sbt file:
name := "Lift 2.5 starter template"
version := "0.0.1"
organization := "net.liftweb"
scalaVersion := "2.10.0"
resolvers ++= Seq("snapshots" at "http://oss.sonatype.org/content/repositories/snapshots",
"staging" at "http://oss.sonatype.org/content/repositories/staging",
"releases" at "http://oss.sonatype.org/content/repositories/releases"
)
seq(com.github.siasia.WebPlugin.webSettings :_*)
unmanagedResourceDirectories in Test <+= (baseDirectory) { _ / "src/main/webapp" }
scalacOptions ++= Seq("-deprecation", "-unchecked")
env in Compile := Some(file("./src/main/webapp/WEB-INF/jetty-env.xml") asFile)
libraryDependencies ++= {
val liftVersion = "2.5"
Seq(
"net.liftweb" %% "lift-webkit" % liftVersion % "compile",
"net.liftmodules" %% "lift-jquery-module_2.5" % "2.3",
"org.eclipse.jetty" % "jetty-webapp" % "8.0.4.v20111024" % "container",
"org.eclipse.jetty" % "jetty-plus" % "8.0.4.v20111024" % "container",
"ch.qos.logback" % "logback-classic" % "1.0.6",
"org.specs2" %% "specs2" % "1.14" % "test",
"net.liftweb" %% "lift-squeryl-record" % liftVersion % "compile",
"net.sf.jt400" % "jt400" % "6.7",
"org.liquibase" % "liquibase-maven-plugin" % "3.0.2"
)
}
This is my boot.scala file:
package bootstrap.liftweb
import _root_.liquibase.database.DatabaseFactory
import _root_.liquibase.database.jvm.JdbcConnection
import _root_.liquibase.exception.DatabaseException
import _root_.liquibase.Liquibase
import _root_.liquibase.resource.FileSystemResourceAccessor
import net.liftweb._
import util._
import Helpers._
import common._
import http._
import sitemap._
import Loc._
import net.liftmodules.JQueryModule
import net.liftweb.http.js.jquery._
import net.liftweb.squerylrecord.SquerylRecord
import org.squeryl.Session
import java.sql.{SQLException, DriverManager}
import org.squeryl.adapters.DB2Adapter
import javax.naming.InitialContext
import javax.sql.DataSource
import code.model.LiftBookSchema
/**
* A class that's instantiated early and run. It allows the application
* to modify lift's environment
*/
class Boot {

  def runChangeLog(ds: DataSource) {
    val connection = ds.getConnection
    try {
      val database = DatabaseFactory.getInstance().
        findCorrectDatabaseImplementation(new JdbcConnection(connection))
      val liquibase = new Liquibase(
        "database/changelog/db.changelog-master.xml",
        new FileSystemResourceAccessor(),
        database
      )
      liquibase.update(null)
    } catch {
      case e: SQLException =>
        connection.rollback()
        throw new DatabaseException(e)
    }
  }

  def boot {
    // where to search snippet
    LiftRules.addToPackages("code")

    prepareDb()

    // Build SiteMap
    val entries = List(
      Menu.i("Home") / "index", // the simple way to declare a menu

      // more complex because this menu allows anything in the
      // /static path to be visible
      Menu(Loc("Static", Link(List("static"), true, "/static/index"),
        "Static Content")))

    // set the sitemap. Note if you don't want access control for
    // each page, just comment this line out.
    LiftRules.setSiteMap(SiteMap(entries: _*))

    // Show the spinny image when an Ajax call starts
    LiftRules.ajaxStart =
      Full(() => LiftRules.jsArtifacts.show("ajax-loader").cmd)

    // Make the spinny image go away when it ends
    LiftRules.ajaxEnd =
      Full(() => LiftRules.jsArtifacts.hide("ajax-loader").cmd)

    // Force the request to be UTF-8
    LiftRules.early.append(_.setCharacterEncoding("UTF-8"))

    // Use HTML5 for rendering
    LiftRules.htmlProperties.default.set((r: Req) =>
      new Html5Properties(r.userAgent))

    // Init the jQuery module, see http://liftweb.net/jquery for more information.
    LiftRules.jsArtifacts = JQueryArtifacts
    JQueryModule.InitParam.JQuery = JQueryModule.JQuery172
    JQueryModule.init()
  }

  def prepareDb() {
    Class.forName("com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource")
    val ds = new InitialContext().lookup("java:/comp/env/jdbc/dsliftbook").asInstanceOf[DataSource]
    runChangeLog(ds)
    SquerylRecord.initWithSquerylSession(
      Session.create(
        ds.getConnection,
        new DB2Adapter)
    )
  }
}
This is my jetty-env.xml file:
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <New id="dsliftbook" class="org.eclipse.jetty.plus.jndi.Resource">
    <Arg></Arg>
    <Arg>jdbc/dsliftbook</Arg>
    <Arg>
      <New class="com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource">
        <Set name="serverName">"jdbc:as400://www.[server].com;libraries=PLAY2TEST";naming=system;errors=full;</Set>
        <Set name="user">[user]</Set>
        <Set name="password">[password]</Set>
      </New>
    </Arg>
  </New>
</Configure>
Okay, I've managed to get connected. One problem was the quotation marks in the jetty-env.xml file. Also, the user name/password I was using apparently did not have the authority required to make this work. I'm not sure why, since this is the same id/password I use for all my iSeries development. For now, I'm using another user profile with security officer authority until I can figure out what's happening or what authorities are required.
Once I got signed on, I was not able to set a library list for the user, and this was causing the SQL to fail: it was looking for a library with the same name as the user ID. For the time being, I've gotten around this by creating a new library named after the user id.
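As a possible alternative to creating that library (an assumption on my part: I believe the jt400 data source classes expose the JDBC URL options, including libraries, as bean properties), the library list could be set directly on the data source in jetty-env.xml:

<!-- Assumption: the jt400 data source exposes a `libraries` bean property
     mirroring the libraries= option of the JDBC URL. -->
<Set name="libraries">PLAY2TEST</Set>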
One other problem: even though I'm supplying both the ID and the password, I'm prompted to enter them before it will connect. The ID and URL come pre-filled, but the password always has to be re-keyed.
I've included the current source for the jetty-env.xml file and the boot.scala file below. Hopefully this may help others.
Thanks to Dave and James for their help!
Bob
boot.scala:
package bootstrap.liftweb
// import _root_.liquibase.database.DatabaseFactory
// import _root_.liquibase.database.jvm.JdbcConnection
// import _root_.liquibase.exception.DatabaseException
// import _root_.liquibase.Liquibase
// import _root_.liquibase.resource.FileSystemResourceAccessor
import net.liftweb._
import util._
import Helpers._
import common._
import http._
import sitemap._
import Loc._
import net.liftmodules.JQueryModule
import net.liftweb.http.js.jquery._
import net.liftweb.squerylrecord.SquerylRecord
import org.squeryl.Session
import java.sql.{SQLException, DriverManager}
import org.squeryl.adapters.DB2Adapter
import javax.naming.InitialContext
import javax.sql.DataSource
import code.model.LiftBookSchema
import com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource
/**
* A class that's instantiated early and run. It allows the application
* to modify lift's environment
*/
class Boot {

  // def runChangeLog(ds: DataSource) {
  //   val connection = ds.getConnection
  //   try {
  //     val database = DatabaseFactory.getInstance().
  //       findCorrectDatabaseImplementation(new JdbcConnection(connection))
  //     val liquibase = new Liquibase(
  //       "database/changelog/db.changelog-master.xml",
  //       new FileSystemResourceAccessor(),
  //       database
  //     )
  //     liquibase.update(null)
  //   } catch {
  //     case e: SQLException => {
  //       connection.rollback()
  //       throw new DatabaseException(e)
  //     }
  //   }
  // }

  def boot {
    // where to search snippet
    LiftRules.addToPackages("code")

    prepareDb()

    // Build SiteMap
    val entries = List(
      Menu.i("Home") / "index", // the simple way to declare a menu

      // more complex because this menu allows anything in the
      // /static path to be visible
      Menu(Loc("Static", Link(List("static"), true, "/static/index"),
        "Static Content")))

    // set the sitemap. Note if you don't want access control for
    // each page, just comment this line out.
    LiftRules.setSiteMap(SiteMap(entries: _*))

    // Show the spinny image when an Ajax call starts
    LiftRules.ajaxStart =
      Full(() => LiftRules.jsArtifacts.show("ajax-loader").cmd)

    // Make the spinny image go away when it ends
    LiftRules.ajaxEnd =
      Full(() => LiftRules.jsArtifacts.hide("ajax-loader").cmd)

    // Force the request to be UTF-8
    LiftRules.early.append(_.setCharacterEncoding("UTF-8"))

    // Use HTML5 for rendering
    LiftRules.htmlProperties.default.set((r: Req) =>
      new Html5Properties(r.userAgent))

    // Init the jQuery module, see http://liftweb.net/jquery for more information.
    LiftRules.jsArtifacts = JQueryArtifacts
    JQueryModule.InitParam.JQuery = JQueryModule.JQuery172
    JQueryModule.init()
  }

  def prepareDb() {
    Class.forName("com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource")
    val ds = new InitialContext().lookup("java:/comp/env/jdbc/dsliftbook").asInstanceOf[DataSource]
    // runChangeLog(ds)
    SquerylRecord.initWithSquerylSession(Session.create(ds.getConnection, new DB2Adapter))
  }
}
jetty-env.xml:
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <New id="dsliftbook" class="org.eclipse.jetty.plus.jndi.Resource">
    <Arg></Arg>
    <Arg>jdbc/dsliftbook</Arg>
    <Arg>
      <New class="com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource">
        <Set name="serverName">www.[server].com</Set>
        <Set name="user">DBUSER</Set>
        <Set name="password">DBUSER</Set>
      </New>
    </Arg>
  </New>
</Configure>
I have been working on this issue for quite a while now and I cannot find a solution...
It's a web app built with Play Framework 2.2.1, using an H2 db (for dev) and a simple model package.
I am trying to implement a REST JSON endpoint and the code works... but only once per server instance.
def createOtherModel() = Action(parse.json) { request =>
  request.body \ "name" match {
    case _: JsUndefined =>
      BadRequest(Json.obj("error" -> true,
        "message" -> "Could not match name =(")).as("application/json")
    case name: JsValue =>
      if (name == JsString("content")) {
        request.body \ "value" match {
          case _: JsUndefined =>
            BadRequest(Json.obj("error" -> true,
              "message" -> "Could not match value =(")).as("application/json")
          case value: JsValue =>
            // this breaks the second time
            val session = ThinkingSession.dummy
            val json = Json.obj(
              "content" -> value,
              "thinkingSession" -> session.id
            )
            Ok(Json.obj("content" -> json)).as("application/json")
        }
      } else {
        BadRequest(Json.obj("error" -> true,
          "message" -> "Name was not content =(")).as("application/json")
      }
  }
}
So basically I read the JSON, echo the "value" value, create a model object, and send its id.
the ThinkingSession.dummy function does this:
def all(): List[ThinkingSession] = {
  // Tried explicitly closing the connection; no difference:
  //val conn = DB.getConnection()
  //try {
  //  DB.withConnection { implicit conn =>
  //    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  //  }
  //} finally {
  //  conn.close()
  //}
  DB.withConnection { implicit conn =>
    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  }
}

def dummy: ThinkingSession = {
  all().head
}
So this should do a SELECT * FROM thinking_session, create a list of model objects from the result, and return the first element of the list.
This works fine the first time after server start, but the second time I get a
play.api.Application$$anon$1: Execution exception[[SQLException: Timed out waiting for a free available connection.]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at scala.Option.map(Option.scala:145) [scala-library.jar:na]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2.applyOrElse(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
Caused by: java.sql.SQLException: Timed out waiting for a free available connection.
at com.jolbox.bonecp.DefaultConnectionStrategy.getConnectionInternal(DefaultConnectionStrategy.java:88) ~[bonecp.jar:na]
at com.jolbox.bonecp.AbstractConnectionStrategy.getConnection(AbstractConnectionStrategy.java:90) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCP.getConnection(BoneCP.java:553) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:131) ~[bonecp.jar:na]
at play.api.db.DBApi$class.getConnection(DB.scala:67) ~[play-jdbc_2.10.jar:2.2.1]
at play.api.db.BoneCPApi.getConnection(DB.scala:276) ~[play-jdbc_2.10.jar:2.2.1]
My application.conf (db section)
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:file:database/[my_db]"
db.default.logStatements=true
db.default.idleConnectionTestPeriod=5 minutes
db.default.connectionTestStatement="SELECT 1"
db.default.maxConnectionAge=0
db.default.connectionTimeout=10000
Initially the only thing set in my config was the connection, and the error occurred; I added all the other settings while reading up on the issue on the web.
What is interesting is that when I use the H2 in-memory db, it works once after server start and fails after that; when I use the H2 file-based db, it only works once, regardless of server restarts.
Can anyone give me some insight into this issue? I have found some material on BoneCP problems and tried upgrading to 0.8.0-rc1, but nothing changed... I am at a loss =(
Try to set a maxConnectionAge and idle timeout
It turns out the error was somewhere else entirely... it was a good ol' stack overflow; I have not seen one of those in a long time. I tried down-voting my own question, but that's not possible^^
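For anyone else hitting the same symptom: a pool timeout can mask a runaway recursion, because each recursive call checks out a connection and the pool drains before the stack actually overflows. A purely hypothetical sketch (not the actual code from this app) of how two helpers like the ones above could end up mutually recursive:

def all(): List[ThinkingSession] =
  DB.withConnection { implicit conn =>
    List(dummy) // oops: accidentally calls dummy ...
  }

def dummy: ThinkingSession =
  all().head // ... which calls all() again, acquiring another connection

// Each nested call holds a pooled connection, so BoneCP reports
// "Timed out waiting for a free available connection" long before
// the JVM reports the StackOverflowError.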
I'm using Specs2 to test my Scalatra web service.
class APISpec extends ScalatraSpec {
  def is = "Simple test" ^
    "invalid key should return status 401" ! root401 ^
    addServlet(new APIServlet(), "/*")

  def root401 = get("/payments") {
    status must_== 401
  }
}
This tests the web service locally (localhost). Now I would like to run the same tests against the production Jetty server. Ideally, I would be able to do this by only changing some URL. Is this possible at all? Or do I have to write my own (possibly duplicated) testing code for the production server?
I don't know how Scalatra manages its URLs, but one thing you can do in specs2 is control parameters from the command line:
class APISpec extends ScalatraSpec with CommandLineArguments { def is = s2"""
  Simple test
    invalid key should return status 401 $root401
    ${addServlet(new APIServlet(), s"$baseUrl/*")}
  """

  def baseUrl = {
    // assuming that you passed 'url www.production.com' on the command line
    val args = arguments.commandLine.split(" ")
    args.zip(args.drop(1)).collectFirst { case ("url", value) => value }.
      getOrElse("localhost:8080")
  }

  def root401 = get(s"$baseUrl/payments") {
    status must_== 401
  }
}
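The base URL can then be supplied when the tests are launched. A sketch, assuming sbt forwards everything after -- to the test framework (specs2 exposes it via arguments.commandLine):

sbt> testOnly *APISpec -- url www.production.com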