Getting execution exception for custom slick profile - postgresql

I'm getting an exception when I try to use my own custom profile with Slick. The reason I want to use it is that I want to keep JSON in my PostgreSQL database. Therefore, I'm using slick-pg.
The exception says:
slick.jdbc.PostgresProfile$ cannot be cast to util.ExtendedPostgresProfile.
This is my code for the ExtendedPostgresProfile:
package util

import com.github.tminglei.slickpg._

trait ExtendedPostgresProfile extends ExPostgresProfile with PgPlayJsonSupport {
  override val api = new API with PlayJsonImplicits
  override def pgjson: String = "jsonb"
}

object ExtendedPostgresProfile extends ExtendedPostgresProfile
This is my DAO class:
class ActivityDAO @Inject()(dbConfigProvider: DatabaseConfigProvider)(implicit ec: ExecutionContext) {
  private val dbConfig = dbConfigProvider.get[ExtendedPostgresProfile]

  import dbConfig._
  import profile.api._

  private class ActivityTable(tag: Tag) extends Table[Activity](tag, "activity") {
    def id: Rep[Long] = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def activity: Rep[JsValue] = column[JsValue]("activity")
    def atTime: Rep[Timestamp] = column[Timestamp]("at_time")
    def activityTypeId: Rep[Int] = column[Int]("activity_type_id")
    def userId: Rep[Long] = column[Long]("user_id")

    override def * : ProvenShape[Activity] =
      (id.?, activity, atTime.?, activityTypeId, userId.?) <> ((Activity.apply _).tupled, Activity.unapply)
  }

  private val activities = TableQuery[ActivityTable]

  def add(activity: Activity): Future[Long] = {
    val query = activities returning activities.map(_.id)
    db.run(query += activity)
  }

  def filter(userId: Long): Future[Seq[Activity]] = {
    db.run(activities.filter(_.userId === userId).result)
  }
}
I've tried searching for the answer myself, but haven't had much luck.

Is your custom profile configured in your play-slick configuration as suggested in the Database Configuration section? I.e. is it util.ExtendedPostgresProfile$ or is it slick.jdbc.PostgresProfile$?
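For reference, a minimal sketch of what that configuration could look like in application.conf, assuming the default database is used (the driver and url lines are placeholders):

slick.dbs.default.profile = "util.ExtendedPostgresProfile$"
slick.dbs.default.db.driver = "org.postgresql.Driver"
slick.dbs.default.db.url = "jdbc:postgresql://localhost:5432/db"

Note the trailing $, which makes play-slick load the Scala object. If this key still points at slick.jdbc.PostgresProfile$, then dbConfigProvider.get[ExtendedPostgresProfile] will attempt exactly the cast reported in the exception.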

Related

How to bind Slick dependency with Lagom?

So, I have this dependency, which is used to create tables and interact with Postgres. Here is a sample class:
class ConfigTable {
  this: DBFactory =>

  import driver.api._

  implicit val configKeyMapper = MappedColumnType.base[ConfigKey, String](e => e.toString, s => ConfigKey.withName(s))

  val configs = TableQuery[ConfigMapping]

  class ConfigMapping(tag: Tag) extends Table[Config](tag, "configs") {
    def key = column[ConfigKey]("key")
    def value = column[String]("value")

    def * = (key, value) <> (Config.tupled, Config.unapply _)
  }

  /**
   * add config
   *
   * @param config
   * @return
   */
  def add(config: Config): Try[Config] = try {
    sync(db.run(configs += config)) match {
      case 1 => Success(config)
      case _ => Failure(new Exception("Unable to add config"))
    }
  } catch {
    case ex: PSQLException =>
      if (ex.getMessage.contains("duplicate key value")) Failure(new Exception("alt id already exists."))
      else Failure(new Exception(ex.getMessage))
  }

  def get(key: ConfigKey): Option[Config] = sync(db.run(configs.filter(x => x.key === key).result)).headOption

  def getAll(): Seq[Config] = sync(db.run(configs.result))
}

object ConfigTable extends ConfigTable with PSQLComponent
PSQLComponent is the abstraction for the database meta-configuration:
import slick.jdbc.PostgresProfile

trait PSQLComponent extends DBFactory {
  val driver = PostgresProfile

  import driver.api.Database

  val db: Database = Database.forConfig("db.default")
}
DBFactory is again an abstraction:
import slick.jdbc.JdbcProfile

trait DBFactory {
  val driver: JdbcProfile

  import driver.api._

  val db: Database
}
application.conf:
db.default {
  driver = "org.postgresql.Driver"
  url = "jdbc:postgresql://localhost:5432/db"
  user = "user"
  password = "pass"
  hikaricp {
    minimumIdle = ${db.default.async-executor.minConnections}
    maximumPoolSize = ${db.default.async-executor.maxConnections}
  }
}

jdbc-defaults.slick.profile = "slick.jdbc.PostgresProfile$"
lagom.persistence.jdbc.create-tables.auto = false
I compile and publish this dependency to Nexus and am trying to use it in my Lagom microservice.
Here is the Loader Class:
class SlickExapleAppLoader extends LagomApplicationLoader {
  override def load(context: LagomApplicationContext): LagomApplication = new SlickExampleApp(context) {
    override def serviceLocator: ServiceLocator = NoServiceLocator
  }

  override def loadDevMode(context: LagomApplicationContext): LagomApplication = new SlickExampleApp(context) with LagomDevModeComponents {
  }

  override def describeService = Some(readDescriptor[SlickExampleLMSServiceImpl])
}

abstract class SlickExampleApp(context: LagomApplicationContext)
  extends LagomApplication(context)
    // No idea which of these to use or how; nothing is clear from the doc either.
    // with ReadSideJdbcPersistenceComponents
    // with ReadSideSlickPersistenceComponents
    // with SlickPersistenceComponents
    with AhcWSComponents {

  wire[SlickExampleScheduler]
}
I'm trying to implement it in this scheduler:
class SlickExampleScheduler @Inject()(lmsService: LMSService,
                                      configuration: Configuration)(implicit ec: ExecutionContext) {

  val brofile = `SomeDomainObject`

  val gson = new Gson()
  val concurrency = Runtime.getRuntime.availableProcessors() * 10

  implicit val timeout: Timeout = 3.minute
  implicit val system: ActorSystem = ActorSystem("LMSActorSystem")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // Getting ExceptionInInitializerError here for ConfigTable ===> ExceptionLine
  val schedulerImplDao = new SchedulerImplDao(ConfigTable)

  def hitLMSAPI = {
    println("=============>1")
    schedulerImplDao.doSomething()
  }

  system.scheduler.schedule(2.seconds, 2.seconds) {
    println("=============>")
    hitLMSAPI
  }
}
I'm not sure if this is the correct way; if it's not, what is the correct way of doing this? It is a project requirement to keep the data models separate from the service, for the obvious reasons of reusability.
Exception Stack:
17:50:38.666 [info] akka.cluster.Cluster(akka://lms-impl-application) [sourceThread=ForkJoinPool-1-worker-1, akkaTimestamp=12:20:38.665UTC, akkaSource=akka.cluster.Cluster(akka://lms-impl-application), sourceActorSystem=lms-impl-application] - Cluster Node [akka.tcp://lms-impl-application@127.0.0.1:45805] - Started up successfully
17:50:38.707 [info] akka.cluster.Cluster(akka://lms-impl-application) [sourceThread=lms-impl-application-akka.actor.default-dispatcher-6, akkaTimestamp=12:20:38.707UTC, akkaSource=akka.cluster.Cluster(akka://lms-impl-application), sourceActorSystem=lms-impl-application] - Cluster Node [akka.tcp://lms-impl-application@127.0.0.1:45805] - No seed-nodes configured, manual cluster join required
java.lang.ExceptionInInitializerError
at com.slick.init.impl.SlickExampleScheduler.<init>(SlickExampleScheduler.scala:29)
at com.slick.init.impl.SlickExampleApp.<init>(SlickExapleAppLoader.scala:42)
at com.slick.init.impl.SlickExapleAppLoader$$anon$2.<init>(SlickExapleAppLoader.scala:17)
at com.slick.init.impl.SlickExapleAppLoader.loadDevMode(SlickExapleAppLoader.scala:17)
at com.lightbend.lagom.scaladsl.server.LagomApplicationLoader.load(LagomApplicationLoader.scala:76)
at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$5(LagomReloadableDevServerStart.scala:176)
at play.utils.Threads$.withContextClassLoader(Threads.scala:21)
at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$3(LagomReloadableDevServerStart.scala:173)
at scala.Option.map(Option.scala:163)
at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$2(LagomReloadableDevServerStart.scala:149)
at scala.util.Success.flatMap(Try.scala:251)
at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$1(LagomReloadableDevServerStart.scala:147)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:658)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: java.lang.NullPointerException
at com.example.db.models.LoginTable.<init>(LoginTable.scala:29)
at com.example.db.models.LoginTable$.<init>(LoginTable.scala:293)
at com.example.db.models.LoginTable$.<clinit>(LoginTable.scala)
... 24 more
This is how it is working:
abstract class SlickExampleApp(context: LagomApplicationContext) extends LagomApplication(context)
  with SlickPersistenceComponents with AhcWSComponents {

  override implicit lazy val actorSystem: ActorSystem = ActorSystem("LMSActorSystem")
  override lazy val materializer: ActorMaterializer = ActorMaterializer()
  override lazy val lagomServer = serverFor[SlickExampleLMSService](wire[SlickExampleLMSServiceImpl])

  lazy val externalService = serviceClient.implement[LMSService]

  override def connectionPool: ConnectionPool = new HikariCPConnectionPool(environment)

  override def jsonSerializerRegistry: JsonSerializerRegistry = new JsonSerializerRegistry {
    override def serializers: immutable.Seq[JsonSerializer[_]] = Vector.empty
  }

  val loginTable = wire[LoginTable]
  wire[SlickExampleScheduler]
}
> One thing I'd like to report: the Lagom docs about the application.conf configuration of Slick are not correct. They misled me for two days; then I dug into the library code and this is how it goes:
private val readSideConfig = system.settings.config.getConfig("lagom.persistence.read-side.jdbc")
private val jdbcConfig = system.settings.config.getConfig("lagom.persistence.jdbc")

private val createTables = jdbcConfig.getConfig("create-tables")
val autoCreateTables: Boolean = createTables.getBoolean("auto")

// users can disable the usage of jndiDbName for userland read-side operations by
// setting the jndiDbName to null. In which case we fallback to slick.db.
// slick.db must be defined otherwise the application will fail to start
val db = {
  if (readSideConfig.hasPath("slick.jndiDbName")) {
    new InitialContext()
      .lookup(readSideConfig.getString("slick.jndiDbName"))
      .asInstanceOf[Database]
  } else if (readSideConfig.hasPath("slick.db")) {
    Database.forConfig("slick.db", readSideConfig)
  } else {
    throw new RuntimeException("Cannot start because read-side database configuration is missing. " +
      "You must define either 'lagom.persistence.read-side.jdbc.slick.jndiDbName' or 'lagom.persistence.read-side.jdbc.slick.db' in your application.conf.")
  }
}

val profile = DatabaseConfig.forConfig[JdbcProfile]("slick", readSideConfig).profile
The configuration it requires is quite different from the one suggested in the docs.
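Piecing the keys together from that snippet, the shape of the configuration the code actually reads would be roughly the following (a sketch inferred from the library code above, with placeholder connection values):

lagom.persistence.read-side.jdbc {
  slick {
    profile = "slick.jdbc.PostgresProfile$"
    db {
      driver = "org.postgresql.Driver"
      url = "jdbc:postgresql://localhost:5432/db"
      user = "user"
      password = "pass"
    }
  }
}
lagom.persistence.jdbc.create-tables.auto = false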

Play Slick: How to write in memory unit test cases using Play and Slick

I am using Play 2.6 and Slick 3.2 with a MySQL database. The problem is that I am not able to write test cases using the H2 database: using Guice, I am not able to inject DatabaseConfigProvider in my repo. Following is my repo class implementation:
class CompanyRepoImpl @Inject() (dbConfigProvider: DatabaseConfigProvider)
                                (implicit ec: ExecutionContext) extends CompanyRepo {

  val logger: Logger = LoggerFactory.getLogger(this.getClass())

  private val dbConfig = dbConfigProvider.get[JdbcProfile]

  import dbConfig._
  import profile.api._

  private val company = TableQuery[Companies]
  ------
}
My test case class:
class CompanyRepoSpec extends AsyncWordSpec with Matchers with BeforeAndAfterAll {

  private lazy val injector: Injector = new GuiceApplicationBuilder()
    .overrides(bind(classOf[CompanyRepo]).to(classOf[CompanyRepoImpl]))
    .in(Mode.Test)
    .injector()

  private lazy val repo: CompanyRepo = injector.instanceOf[CompanyRepo]
  private lazy val dbApi = injector.instanceOf[DBApi]

  override protected def beforeAll(): Unit = {
    Evolutions.applyEvolutions(database = dbApi.database("test"))
  }

  override protected def afterAll(): Unit = {
    Evolutions.cleanupEvolutions(database = dbApi.database("test"))
  }

  -----------------------
}
The test configuration, application.test.conf:
play.evolutions.db.test.enabled=true
play.evolutions.autoApply=true
slick.dbs.test.profile="slick.jdbc.H2Profile$"
slick.dbs.test.db.driver="org.h2.Driver"
slick.dbs.test.db.url="jdbc:h2:mem:test;MODE=MySQL"
Maybe my configuration is not valid for testing. How can we write test cases using Scala, Slick, and Play?
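One detail that stands out: dbConfigProvider.get[JdbcProfile] without a qualifier resolves the default database, while the configuration above only defines slick.dbs.test. A sketch of one possible fix (an assumption, not a verified answer) is to override the default database entries directly in the test injector, so the repo transparently runs against H2:

private lazy val injector: Injector = new GuiceApplicationBuilder()
  .configure(
    "slick.dbs.default.profile" -> "slick.jdbc.H2Profile$",
    "slick.dbs.default.db.driver" -> "org.h2.Driver",
    "slick.dbs.default.db.url" -> "jdbc:h2:mem:test;MODE=MySQL"
  )
  .overrides(bind(classOf[CompanyRepo]).to(classOf[CompanyRepoImpl]))
  .in(Mode.Test)
  .injector()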

Scala error: org.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for class scala.Some

I am trying to get the count of a Mongo query result, but I am getting the error
org.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for class scala.Some. Can somebody help?
This is my code:
def fetchData() = {
  val mongoClient = MongoClient("mongodb://127.0.0.1")
  val database = mongoClient.getDatabase("assignment")
  val movieCollection = database.getCollection("movies")
  val ratingCollection = database.getCollection("ratings")

  val latch1 = new CountDownLatch(1)

  movieCollection.find().subscribe(new Observer[Document] {
    override def onError(e: Throwable): Unit = {
      println("Error while fetching data")
      e.printStackTrace()
    }

    override def onComplete(): Unit = {
      latch1.countDown()
      println("Completed fetching data")
    }

    override def onNext(movie: Document): Unit = {
      if (movie.get("movieId") != null) {
        ratingCollection.count(equal("movieId", movie.get("movieId"))).subscribe(new Observer[Long] {
          override def onError(e: Throwable): Unit = println(s"onError: $e")
          override def onNext(result: Long): Unit = { println(s"In count result : $result") }
          override def onComplete(): Unit = println("onComplete")
        })
      }
    }
  })

  latch1.await()
  mongoClient.close()
}
I am using Mongo 3.2.12 and the Scala driver:
<dependency>
    <groupId>org.mongodb.scala</groupId>
    <artifactId>mongo-scala-driver_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
Use the code in this answer, and then add that codec to your codec registry. First, add
import org.bson.codecs.configuration.CodecRegistries.fromCodecs
You might already have other imports from that package; for example, if you're using providers, registries, and codecs:
import org.bson.codecs.configuration.CodecRegistries.{fromRegistries, fromProviders, fromCodecs}
Just make sure you have everything you need imported.
Then:
val codecRegistry = fromRegistries(/* ..., */ fromCodecs(new SomeCodec()), DEFAULT_CODEC_REGISTRY)
val mongoClient = MongoClient("mongodb://127.0.0.1")
val database = mongoClient.getDatabase("assignment").withCodecRegistry(codecRegistry)
This answer is a little bit old; after losing many hours solving the same issue, I'm writing an update to it.
Using Macros it's much easier now:
import org.mongodb.scala.bson.codecs._
val movieCodecProvider: CodecProvider = Macros.createCodecProviderIgnoreNone[Movie]()
val codecRegistry: CodecRegistry = fromRegistries(fromProviders(movieCodecProvider), DEFAULT_CODEC_REGISTRY)
val movieCollection: MongoCollection[Movie] = mongo.database.withCodecRegistry(codecRegistry).getCollection("movie_collection")
Pay attention when you write a "manual" query (i.e. a query in which you are not parsing an entire Movie object, such as an update): you have to handle the Option field like a plain object.
So, to set it to None you do:
movieCollection.updateOne(
  equal("_id", movie._id),
  unset("foo")
)
To set it to Some:
movieCollection.updateOne(
  equal("_id", movie._id),
  set("foo", "some_value")
)
Please make sure all fields are transformed into Strings, especially enums, where you want the field to be inserted as <your-enum>.map(_.toString).
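For example, a small sketch with a hypothetical Genre enumeration and an optional field (Genre and this Movie shape are illustrative, not from the question):

import org.mongodb.scala.bson.ObjectId

object Genre extends Enumeration { val Comedy, Drama = Value }

// store the enum as Option[String] so the default codecs can handle it
case class Movie(_id: ObjectId, genre: Option[String])

val rawGenre: Option[Genre.Value] = Some(Genre.Comedy)
val movie = Movie(new ObjectId(), genre = rawGenre.map(_.toString))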
The code that causes the exception is this:
ratingCollection.count(equal("movieId", movie.get("movieId")))
Specifically movie.get(...), which has return type Option[BsonValue]. You cannot query collections with Option[T] values. Since you already checked against null, you could change the code to movie.get("movieId").get, but the Scala approach would be to utilize pattern matching, something akin to this:
override def onNext(movie: Document): Unit = {
  movie.get("movieId") match {
    case Some(movieId: BsonInt32) =>
      ratingCollection.count(equal("movieId", movieId)).subscribe(new Observer[Long] {
        override def onError(e: Throwable): Unit = println(s"onError: $e")
        override def onNext(result: Long): Unit = { println(s"In count result : $result") }
        override def onComplete(): Unit = println("onComplete")
      })
    case invalidId =>
      println(s"invalid id ${invalidId}")
  }
}
The underlying issue is how the Mongo Scala driver handles Option[T] monads. It's not well documented. One of the answers already provided to this question shows how to solve the issue when querying case classes like Foo(bar: Option[BsonValue]), but be aware that it fails for other case classes such as Foo(bar: Seq[Option[BsonValue]]).
As mentioned in the answer I refer to, createCodecProviderIgnoreNone and related codec providers only apply to full-document queries, like insert, findReplace, etc. When doing field-operation queries you have to unpack the Option yourself. I prefer to do this using pattern matching, as shown in my example.
This works for me using the versions below:
scalaVersion := "2.13.1"
sbt.version = 1.3.8
import org.mongodb.scala.bson.ObjectId

object Person {
  def apply(firstName: String, lastName: String): Person =
    Person(new ObjectId(), firstName, lastName)
}

case class Person(_id: ObjectId, firstName: String, lastName: String)
import models.Person
import org.mongodb.scala.{Completed, MongoClient, MongoCollection, MongoDatabase, Observer}
import org.mongodb.scala.bson.codecs.Macros._
import org.mongodb.scala.bson.codecs.DEFAULT_CODEC_REGISTRY
import org.bson.codecs.configuration.CodecRegistries.{fromRegistries, fromProviders}

object PersonMain extends App {
  val codecRegistry = fromRegistries(fromProviders(classOf[Person]), DEFAULT_CODEC_REGISTRY)

  val mongoClient: MongoClient = MongoClient("mongodb://localhost")
  val database: MongoDatabase = mongoClient.getDatabase("mydb").withCodecRegistry(codecRegistry)
  val collection: MongoCollection[Person] = database.getCollection("people")

  def addDocument(doc: Person) = {
    collection.insertOne(doc)
      .subscribe(new Observer[Completed] {
        override def onNext(result: Completed): Unit = println(s"Inserted $doc")
        override def onError(e: Throwable): Unit = println(s"Failed $e")
        override def onComplete(): Unit = println(s"Completed inserting $doc")
      })
  }

  addDocument(Person("name", "surname"))

  mongoClient.close()
}

Close or shutdown of H2 database after tests is not working

I am facing a problem with database clean-up after each test when using ScalaTest with Slick.
Here is code of the test:
class H2DatabaseSpec extends WordSpec with Matchers with ScalaFutures with BeforeAndAfter {
  implicit override val patienceConfig = PatienceConfig(timeout = Span(5, Seconds))

  val h2DB: H2DatabaseService = new H2DatabaseService
  var db: Database = _

  before {
    db = Database.forConfig("h2mem1")
    h2DB.createSchema.futureValue
  }

  after {
    db.shutdown.futureValue
  }

  "H2 database" should {
    "query a question" in {
      val newQuestion: QuestionEntity = QuestionEntity(Some(1L), "First question")
      h2DB.insertQuestion(newQuestion).futureValue

      val question = h2DB.getQuestionById(1L)
      question.futureValue.get shouldBe newQuestion
    }
  }

  it should {
    "query all questions" in {
      val newQuestion: QuestionEntity = QuestionEntity(Some(2L), "Second question")
      h2DB.insertQuestion(newQuestion).futureValue

      val questions = h2DB.getQuestions
      questions.futureValue.size shouldBe 1
    }
  }
}
The database service just invokes the run method on the defined database:
class H2DatabaseService {
  val db = Database.forConfig("h2mem1")
  val questions = TableQuery[Question]

  def createSchema =
    db.run(questions.schema.create)

  def getQuestionById(id: Long): Future[Option[QuestionEntity]] =
    db.run(questions.filter(_.id === id).result.headOption)

  def getQuestions: Future[Seq[QuestionEntity]] =
    db.run(questions.result)

  def insertQuestion(question: QuestionEntity): Future[Int] =
    db.run(questions += question)
}

class Question(tag: Tag) extends Table[QuestionEntity](tag, "QUESTION") {
  def id = column[Option[Long]]("QUESTION_ID", O.PrimaryKey, O.AutoInc)
  def title = column[String]("TITLE")

  def * = (id, title) <> ((QuestionEntity.apply _).tupled, QuestionEntity.unapply)
}

case class QuestionEntity(id: Option[Long] = None, title: String) {
  require(!title.isEmpty, "title.empty")
}
And the database is defined as follows:
h2mem1 = {
  url = "jdbc:h2:mem:test1"
  driver = org.h2.Driver
  connectionPool = disabled
  keepAliveConnection = true
}
I am using Scala 2.11.8, Slick 3.1.1, H2 database 1.4.192 and scalatest 2.2.6.
The error that appears when the tests are executed is Table "QUESTION" already exists. So it looks like the shutdown() method has no effect at all (but it is invoked; I checked in the debugger).
Does anybody know how to handle such a scenario? How do I clean up the database properly after each test?
The database was not correctly cleaned up because the method was invoked on a different object: H2DatabaseService has its own db object and the test has its own. The issue was fixed after refactoring the H2 database service and invoking:
after {
  h2DB.db.shutdown.futureValue
}

Play Framework and Slick: testing database-related services

I'm trying to follow the most idiomatic way to have a few fully tested DAO services.
I've got a few case classes such as the following:
case class Person(
  id: Int,
  firstName: String,
  lastName: String
)

case class Car(
  id: Int,
  brand: String,
  model: String
)
Then I've got a simple DAO class like this:
class ADao @Inject()(protected val dbConfigProvider: DatabaseConfigProvider) extends HasDatabaseConfigProvider[JdbcProfile] {
  import driver.api._

  private val persons = TableQuery[PersonTable]
  private val cars = TableQuery[CarTable]
  private val personCar = TableQuery[PersonCar]

  class PersonTable(tag: Tag) extends Table[Person](tag, "person") {
    def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    def firstName = column[String]("name")
    def lastName = column[String]("description")

    def * = (id, firstName, lastName) <> (Person.tupled, Person.unapply)
  }

  class CarTable(tag: Tag) extends Table[Car](tag, "car") {
    def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    def brand = column[String]("brand")
    def model = column[String]("model")

    def * = (id, brand, model) <> (Car.tupled, Car.unapply)
  }

  // relationship
  class PersonCar(tag: Tag) extends Table[(Int, Int)](tag, "person_car") {
    def carId = column[Int]("c_id")
    def personId = column[Int]("p_id")

    def * = (carId, personId)
  }

  // simple function that I want to test
  def getAll(): Future[Seq[((Person, (Int, Int)), Car)]] = db.run(
    persons
      .join(personCar).on(_.id === _.personId)
      .join(cars).on(_._2.carId === _.id)
      .result
  )
}
And my application.conf looks like:
slick.dbs.default.driver="slick.driver.PostgresDriver$"
slick.dbs.default.db.driver="org.postgresql.Driver"
slick.dbs.default.db.url="jdbc:postgresql://super-secrete-prod-host/my-awesome-db"
slick.dbs.default.db.user="myself"
slick.dbs.default.db.password="yolo"
Now, going through Testing with databases and trying to mimic the play-slick sample project, I'm getting into a lot of trouble, and I cannot seem to understand how to make my test use a different database. I suppose I need to add a different db to my conf file, say slick.dbs.test, but then I couldn't find out how to inject that inside the test.
Also, in the sample repo there's some "magic" like Application.instanceCache[CatDAO] or app2dao(app).
Can anybody point me at a full-fledged example or repo that deals correctly with testing Play and Slick?
Thanks.
I agree it's confusing. I don't know if this is the best solution, but I ended up having a separate configuration file test.conf that specifies an in-memory database:
slick.dbs {
  default {
    driver = "slick.driver.H2Driver$"
    db.driver = "org.h2.Driver"
    db.url = "jdbc:h2:mem:play-test"
  }
}
and then told sbt to use this when running the tests:
[..] javaOptions in Test ++= Seq("-Dconfig.file=conf/test.conf")
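One caveat worth adding (a build-setup assumption, not part of the original answer): sbt only passes javaOptions to tests when the test JVM is forked, so the relevant build.sbt lines would look like:

fork in Test := true // javaOptions are only applied to forked JVMs
javaOptions in Test ++= Seq("-Dconfig.file=conf/test.conf")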