I am trying to use the current version of Slick and slick-codegen (3.2.0) with a SQLite database. When I list the tables, I get the names correctly. However, when I try to generate classes corresponding to the tables, I get no output.
This works:
import scala.concurrent.Await
import scala.concurrent.duration._
import slick.jdbc.SQLiteProfile.api._
import slick.jdbc.meta.MTable

object TableCodeGenerator extends App
{
  val db = Database.forURL("jdbc:sqlite:/home/samik/db/mydb.db", driver = "org.sqlite.JDBC")
  val tables = Await.result(db.run(MTable.getTables), 1.second).toList

  tables.foreach(println)
}
I get the output below:
MTable(MQName(models),TABLE,null,None,None,None)
MTable(MQName(users),TABLE,null,None,None,None)
However, the following code, run the same way, doesn't work:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import slick.codegen.SourceCodeGenerator
import slick.jdbc.SQLiteProfile
import slick.jdbc.SQLiteProfile.api._
import slick.jdbc.meta.MTable

object TableCodeGenerator extends App
{
  val db = Database.forURL("jdbc:sqlite:/home/samik/db/mydb.db", driver = "org.sqlite.JDBC")
  val dbio = SQLiteProfile.createModel(Some(MTable.getTables))
  val model = db.run(dbio)
  val codegenFuture: Future[SourceCodeGenerator] = model.map(model => new SourceCodeGenerator(model))

  codegenFuture.onSuccess
  {
    case codegen => codegen.writeToFile(
      "org.sqlite.JDBC",
      "/tmp",
      "my.package.dao",
      "Tables",
      "Tables.scala")
  }
}
The code runs through successfully, but I do not see any output file. Is there anything I am missing?
The above was happening because the underlying code was silently throwing an exception. The reason for the exception was that I was using a "feature" of SQLite whereby, if you don't mention a column's datatype in the schema, SQLite assumes it to be of text type. That, however, creates a problem for the Slick code generator.
More details here. The immediate solution was to fix the schema, but I believe this has since been fixed in Slick as well.
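For anyone hitting the same silent failure: one way to make the exception visible is to block on the codegen future so it propagates, instead of relying on an onSuccess callback that the App may exit before running. A minimal sketch reusing the names from the code above (note that writeToFile's first argument is a Slick profile class name, not the JDBC driver):

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import slick.codegen.SourceCodeGenerator

// Block until codegen finishes; any exception thrown while building the
// model or writing the file now surfaces here instead of being dropped.
val done: Future[Unit] = model.map { m =>
  new SourceCodeGenerator(m).writeToFile(
    "slick.jdbc.SQLiteProfile", "/tmp", "my.package.dao", "Tables", "Tables.scala")
}
Await.result(done, 30.seconds)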
I am writing unit tests for my Play app. Some data in the app rarely changes but needs to be accessed frequently, so I fetch it at startup and store it in a hash map.
var inMemQuestion: Map[Long, QuestionSchema.Question] = {
  println("Getting all questions from DB.....")
  Await.result(listAll, Duration.Inf).map(a => a.id -> a).toMap
}
In my methods I use the data from this map instead of the DB.
But when I try to mock this for unit testing my methods, like this:
val sampleQsn = QuestionSchema.Question(9.toLong, "1 or 2", 7.toLong, 4.toLong, Some(1.0), Some(false),
  new java.sql.Timestamp(1599545742), new java.sql.Timestamp(1599552379))
val questionsRepo = mock[QuestionRepository]
when(questionsRepo.inMemQuestion(9)).thenReturn( // null pointer error on this line
  sampleQsn
)
I'm getting a NullPointerException. Can anyone help me resolve it?
My goal here is to retrieve the Board entity upon insert. If the entity already exists, I just want to return the existing object (which coincides with the argument of the add method); otherwise I'd like to return the new row inserted in the database.
I am using Play 2.7 with Slick 3.2 and MySQL 5.7.
The implementation is based on this answer, which is more than insightful.
Also, from Essential Slick:
exec(messages returning messages +=
Message("Dave", "So... what do we do now?"))
DAO code
@Singleton
class SlickDao @Inject()(db: Database, implicit val playDefaultContext: ExecutionContext) extends MyDao {

  override def add(board: Board): Future[Board] = {
    val insert = Boards
      .filter(b => b.id === board.id).exists.result.flatMap { exists =>
        if (!exists) Boards returning Boards += board
        else DBIO.successful(board) // no-op - return specified board
      }.transactionally
    db.run(insert)
  }
}
EDIT: I also tried replacing the += part with
Boards returning Boards.map(_.id) into { (b, boardId) => b.copy(id = boardId) } += board
and this does not work either.
The table definition is the following:
object Board {

  val Boards: TableQuery[BoardTable] = TableQuery[BoardTable]

  class BoardTable(tag: Tag) extends Table[BoardRow](tag, "BOARDS") {
    // columns
    def id = column[String]("ID", O.Length(128))
    def x = column[String]("X")
    def y = column[Option[Int]]("Y")
    // foreign key definitions
    .....
    // primary key definitions
    def pk = primaryKey("PK_BOARDS", (id, y))
    // default projection
    def * = (id, x, y).mapTo[BoardRow]
  }
}
I would expect a new row in the table, but although the exists query gets executed,
select exists(select `ID`, `X`, `Y`
from `BOARDS`
where ((`ID` = '92f10c23-2087-409a-9c4f-eb2d4d6c841f'));
and the result is false, there is no insert. Nor is there any logging in the database showing that any insert statements were received (I am referring to the general_log file).
So first of all, the problem with the query execution was a mishandling of the futures that the DAO produced: I was assigning the insert statement to a future, but this future was never submitted to an execution context. My bad, all the more so because I did not mention it in the description of the problem.
But when this was actually fixed, I could see the actual error in my application's logs. The stack trace was the following:
slick.SlickException: This DBMS allows only a single column to be returned from an INSERT, and that column must be an AutoInc column.
at slick.jdbc.JdbcStatementBuilderComponent$JdbcCompiledInsert.buildReturnColumns(JdbcStatementBuilderComponent.scala:67)
at slick.jdbc.JdbcActionComponent$ReturningInsertActionComposerImpl.x$17$lzycompute(JdbcActionComponent.scala:659)
at slick.jdbc.JdbcActionComponent$ReturningInsertActionComposerImpl.x$17(JdbcActionComponent.scala:659)
at slick.jdbc.JdbcActionComponent$ReturningInsertActionComposerImpl.keyColumns$lzycompute(JdbcActionComponent.scala:659)
at slick.jdbc.JdbcActionComponent$ReturningInsertActionComposerImpl.keyColumns(JdbcActionComponent.scala:659)
So at its core this is a MySQL limitation. I had to redesign my schema to make this retrieval-after-insert possible. The redesign introduces a dedicated primary key (completely unrelated to the business logic) which is also an AutoInc column, as the stack trace prescribes.
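Such a redesign might look roughly like the following sketch (illustrative only; it assumes BoardRow gains a surrogate pk field):

class BoardTable(tag: Tag) extends Table[BoardRow](tag, "BOARDS") {
  // dedicated surrogate key, unrelated to the business logic
  def pk = column[Long]("PK", O.PrimaryKey, O.AutoInc)
  def id = column[String]("ID", O.Length(128))
  def x = column[String]("X")
  def y = column[Option[Int]]("Y")
  def * = (pk, id, x, y).mapTo[BoardRow]
}

// MySQL can return this single AutoInc column after an insert
val insertAction =
  Boards returning Boards.map(_.pk) into ((row, newPk) => row.copy(pk = newPk)) += board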
In the end that solution became too involved, so I decided instead to return the actual argument of the add method when the insert succeeds. The implementation of the add method thus ended up as something like this:
override def add(board: Board): Future[Board] = {
db.run(Boards.insertOrUpdate(board).map(_ => board))
}
while there was some appropriate Future error handling in the controller that invoked the underlying repo.
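That error handling might look something like this hypothetical sketch of a Play controller (BoardController, the route, and the implicit JSON Format[Board] are all assumptions, not part of the original code):

import javax.inject.Inject
import play.api.libs.json._
import play.api.mvc._
import scala.concurrent.ExecutionContext
import scala.util.control.NonFatal

class BoardController @Inject()(slickDao: SlickDao, cc: ControllerComponents)
                               (implicit ec: ExecutionContext) extends AbstractController(cc) {

  // assumes an implicit Format[Board] is in scope
  def create: Action[JsValue] = Action.async(parse.json) { request =>
    val board = request.body.as[Board]
    slickDao.add(board)
      .map(b => Created(Json.toJson(b))) // upsert succeeded, echo the board back
      .recover { case NonFatal(e) => InternalServerError(e.getMessage) } // surface DB failures
  }
}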
If you're lucky enough not to be using MySQL with Slick, I suppose you might be able to do this without a dedicated AutoInc primary key. If not, then I suppose this is a one-way road.
I'm having some issues with a Wicket (8.0.0-M4) NumberTextField in Kotlin (1.1.0).
My stripped-down form looks like this:
class Test : AbstractWebPage() {

    val housenumberModel: Model<Int> = Model<Int>()
    val housenumber = NumberTextField<Int>("housenumberModel", housenumberModel)
    val form: Form<Unit> = object : Form<Unit>("adressForm") {}

    override fun onInitialize() {
        super.onInitialize()
        form.add(housenumber.setRequired(false))
        form.add(object : SubmitLink("submit") {
            override fun onSubmit() {
                super.onSubmit()
                println(housenumberModel.`object`) // this is line 28
            }
        })
        add(form)
    }
}
After submitting the form I get the following stacktrace:
java.lang.ClassCastException: java.lang.String cannot be cast to
java.lang.Number
at com.mycompany.test.pages.Test$onInitialize$1.onSubmit(Test.kt:28)
at org.apache.wicket.markup.html.form.Form.delegateSubmit(Form.java:1312)
at org.apache.wicket.markup.html.form.Form.process(Form.java:979)
at org.apache.wicket.markup.html.form.Form.onFormSubmitted(Form.java:802)
at org.apache.wicket.markup.html.form.Form.onRequest(Form.java:715)
at org.apache.wicket.core.request.handler.ListenerRequestHandler.internalInvoke(ListenerRequestHandler.java:301)
at org.apache.wicket.core.request.handler.ListenerRequestHandler.invoke(ListenerRequestHandler.java:250)
at org.apache.wicket.core.request.handler.ListenerRequestHandler.invokeListener(ListenerRequestHandler.java:210)
at org.apache.wicket.core.request.handler.ListenerRequestHandler.respond(ListenerRequestHandler.java:203)
at org.apache.wicket.request.cycle.RequestCycle$HandlerExecutor.respond(RequestCycle.java:912)
at org.apache.wicket.request.RequestHandlerExecutor.execute(RequestHandlerExecutor.java:65)
at org.apache.wicket.request.cycle.RequestCycle.execute(RequestCycle.java:283)
at org.apache.wicket.request.cycle.RequestCycle.processRequest(RequestCycle.java:253)
at org.apache.wicket.request.cycle.RequestCycle.processRequestAndDetach(RequestCycle.java:221)
at org.apache.wicket.protocol.http.WicketFilter.processRequestCycle(WicketFilter.java:262)
at org.apache.wicket.protocol.http.WicketFilter.processRequest(WicketFilter.java:204)
at org.apache.wicket.protocol.http.WicketFilter.doFilter(WicketFilter.java:286)
[...]
If I use
val housenumberModel: Model<Int> = Model.of(0)
instead of
val housenumberModel: Model<Int> = Model<Int>()
everything works fine. But since my NumberTextField is optional I don't want to have it pre-initialized with 0.
My colleagues and I tried changing the type signature of the model in every way we could imagine, but came to no solution. A co-worker suggested writing a custom Wicket converter, since Kotlin's Int is represented as a primitive type (from the docs: "On the JVM, non-nullable values of this type are represented as values of the primitive type int."). Even though I don't know yet whether this would work, it seems like overkill to me.
Another hack I could think of: writing some JavaScript to delete the zero from the input field. Also not really something I would want to do.
Question: Is there a simple solution to my problem?
(And as a bonus question: has anyone already written a larger Wicket application in Kotlin and can tell me whether this combination is ready for prime time for developing a critical project, or is my problem just the tip of the iceberg?)
[edit]
Solution as pointed out by svenmeier:
Using
val housenumber = NumberTextField<Int>("housenumberModel", housenumberModel, Int::class.java)
works.
Or as an alternative:
val housenumbervalue: Int? = null
val housenumberModel: IModel<Int> = PropertyModel<Int>(this, "housenumbervalue")
val housenumber = NumberTextField<Int>("housenumberModel", housenumberModel)
Because of type erasure your NumberTextField cannot detect the generic type parameter of your model. Since your model object is null, it cannot be used to derive the type either.
In this case Wicket assumes a String model object type :/.
Either provide the type to the NumberTextField explicitly, or use a model that keeps its generic information, e.g. a PropertyModel.
There is a way to tell Wicket about the type you want: add the type to the constructor. More here.
In Java it looks like this:
new NumberTextField<Integer>("housenumberModel", housenumberModel, Integer.class);
One question I have about current Scala CouchDB drivers is whether they can work with "partial" schemas. I'll try to explain what I mean: the libraries I've seen all seem to want to do a complete conversion from JSON docs in the database to a Scala object, handle the Scala object, and convert it back to JSON. This is fine if your application knows everything about that type of object, especially if it is the sole piece of software interacting with that database. However, what if I want to write a little application that only knows about part of the JSON object: for example, what if I'm only interested in a 'mybook' component embedded like this:
{
_id: "0ea56a7ec317138700743cdb740f555a",
_rev: "2-3e15c3acfc3936abf10ea4f84a0aeced",
type: "user",
profiles: {
mybook: {
key: "AGW45HWH",
secret: "g4juh43ui9hg929gk4"
},
.. 6 or 7 other profiles
},
.. lots of other stuff
}
I really don't want to convert the whole JSON AST to a Scala object. On the other hand, in CouchDB you must save back the entire JSON doc, so it needs to be preserved somehow. I think what I really want is something like this:
class MyBook {
private val userJson: JObject = ... // full JSON retrieved from the database
lazy val _id: String = ... // parsed from the JSON
lazy val _rev: String = ... // parsed from the JSON
lazy val key: String = ... // parsed from the JSON
lazy val secret: String = ... // (ditto)
def withSecret(secret: String): MyBook = ... // new object with altered userJson
def save(db: CouchDB) = ... // save userJson back to couchdb
}
Advantages:
computationally cheaper to extract only needed fields
don't have to sync with database evolution except for 'mybook' part
more suitable for development with partial schemas
safer, because there is less chance of inadvertently deleting fields if we haven't kept up with the database schema
Disadvantages:
domain objects in Scala are not pristinely independent of couch/JSON
more memory use per object
Is this possible with any of the current Scala drivers? With either scouchdb or the new Sohva library, it seems not.
As long as you have a good JSON library and a good HTTP client library, implementing a schemaless CouchDB client library is really easy.
Here is an example in Java: code, tests.
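In the same spirit, here is a rough Scala sketch of the raw round trip: fetch the document as-is and save the whole thing back, using scalaj-http purely as an illustrative HTTP client (the URL, database name, and document id are assumptions):

import scalaj.http._

// Fetch the raw document; keep the body as an opaque JSON string.
val docUrl = "http://localhost:5984/mydb/0ea56a7ec317138700743cdb740f555a"
val doc: String = Http(docUrl).asString.body

// ...inspect or tweak only the parts you know about...

// Save the whole document back; it must still carry _id and _rev.
val resp = Http(docUrl)
  .put(doc)
  .header("Content-Type", "application/json")
  .asString
println(resp.code) // 201 on success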
My CouchDB library uses spray-json for (de)serialization, which is very flexible and lets you ignore parts of a document but still save them. Let's look at a simplified example.
Say we have a document like this:
{
dontcare: {
...
},
important: "foo"
}
Then you could declare a class to hold information from this document and define how the conversion is done:
import spray.json._
import DefaultJsonProtocol._

case class Dummy(js: JsValue)
case class PartialDoc(dontcare: Dummy, important: String)

// pass the raw JSON AST through untouched on read and write
implicit object DummyFormat extends JsonFormat[Dummy] {
  override def read(js: JsValue): Dummy = Dummy(js)
  override def write(d: Dummy): JsValue = d.js
}

implicit val productFormat = jsonFormat2(PartialDoc)
This will ignore anything in dontcare but still save it as a raw JSON AST. Of course this example is not as complex as the one in your question, but it should give you an idea of how to solve your problem.
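For completeness, usage might look like this (a small sketch, assuming the definitions above are in scope):

val raw = """{ "dontcare": { "a": 1 }, "important": "foo" }""".parseJson

val doc = raw.convertTo[PartialDoc]       // dontcare is kept as an opaque JSON AST
val updated = doc.copy(important = "bar") // work only with the field we know about
val roundTripped = updated.toJson         // dontcare is serialized back untouched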
I'm writing a web app using Play 2 and Salat (for MongoDB binding). I would like to test some methods in the Lesson model (for instance, that I retrieve the right lesson by id). The problem is that I don't want to pollute my current DB with dummy lessons. How can I use a fake DB with Salat and ScalaTest? Here is one of my test files. It creates two lessons, inserts them into the DB, and runs some tests on them:
class LessonSpec extends FlatSpec with ShouldMatchers {

  object FakeApp extends FakeApplication()

  val newLesson1 = Lesson(
    title = "lesson1",
    level = 5,
    explanations = "expl1",
    questions = Seq.empty)
  LessonDAO.insert(newLesson1)

  val newLesson2 = Lesson(
    title = "lesson2",
    level = 5,
    explanations = "expl2",
    questions = Seq.empty)
  LessonDAO.insert(newLesson2)

  "Lesson Model" should "be retrieved by level" in {
    running(FakeApp) {
      assert(Lesson.findByLevel(5).size === 2)
    }
  }

  it should "be of size 0 if no lesson of the level is found" in {
    running(FakeApp) {
      Lesson.findByLevel(4) should be(Nil)
    }
  }

  it should "be retrieved by title" in {
    running(FakeApp) {
      Lesson.findOneByTitle("lesson1") should be(Some(Lesson("lesson1", 5, "expl1", List())))
    }
  }
}
I searched the web but can't find a good link or project that uses Salat and ScalaTest.
Salat developer here. My recommendation would be to have a separate, test-only database. You can populate it with test data to put it in a known state - see the Casbah tests for how to do this - and then test against it however you like, clearing out collections as necessary.
I use specs2, not ScalaTest, but the principle is the same - see the source code of the Salat tests.
Here's a good test to get you started:
https://github.com/novus/salat/blob/master/salat-core/src/test/scala/com/novus/salat/test/dao/SalatDAOSpec.scala
Note that in my base spec I clear out my test database - this gets run before each spec:
trait SalatSpec extends Specification with Logging {

  override def is =
    Step {
      // log.info("beforeSpec: registering BSON conversion helpers")
      com.mongodb.casbah.commons.conversions.scala.RegisterConversionHelpers()
      com.mongodb.casbah.commons.conversions.scala.RegisterJodaTimeConversionHelpers()
    } ^
      super.is ^
      Step {
        // log.info("afterSpec: dropping test MongoDB '%s'".format(SalatSpecDb))
        MongoConnection().dropDatabase(SalatSpecDb)
      }
}
And then in SalatDAOSpec, I run my tests inside scopes that create, populate and/or clear out individual collections, so that the tests run against an expected state. One hitch: if you run your tests concurrently on the same collection, they may fail due to unexpected state. The solution is either to run your tests in isolated, special-purpose collections, or to force your tests to run serially so that operations on a single collection don't step on each other as different test cases modify it.
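In specs2, forcing serial execution is a one-liner; a minimal sketch (the spec name and example are illustrative):

import org.specs2.mutable.Specification

class MyDAOSpec extends Specification {
  sequential // run examples one at a time, so collection state stays predictable

  "MyDAO" should {
    "insert then find the same record" in {
      // ...exercise the DAO against its own test collection...
      success
    }
  }
}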
If you post to the Scalatest mailing list (http://groups.google.com/group/scalatest-users), I'm sure someone can recommend the correct way to set this up.
In my applications, I use a parameter in application.conf to specify the Mongo database name. When initializing my FakeApplication, I override that parameter so that my unit tests can use a real Mongo instance but do not see any of my production data.
Glossing over a few details specific to my application, my tests look something like this:
// wipe any existing data
db.collectionNames.foreach { colName =>
  if (colName != "system.indexes") db.getCollection(colName).drop()
}

app = FakeApplication(additionalConfiguration = Map("mongo.db.name" -> "unit-test"))
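Putting it together, a test case might then run against the fake app like this (a sketch reusing the Lesson fixtures from the question; the mongo.db.name key comes from my own setup):

class LessonSpec extends FlatSpec with ShouldMatchers {

  val app = FakeApplication(additionalConfiguration = Map("mongo.db.name" -> "unit-test"))

  "Lesson Model" should "only see data inserted by the test" in {
    running(app) {
      LessonDAO.insert(Lesson(title = "lesson1", level = 5, explanations = "expl1", questions = Seq.empty))
      Lesson.findByLevel(5).size should be(1)
    }
  }
}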