Play Framework Scala play-slick3-example project: JdbcSQLException: Column "TEST" not found

I downloaded a starter application with Play Framework 2.5 and Slick 3.1, git here.
When I add a simple column named "test" to Project.scala, I get this error:
[JdbcSQLException: Column "TEST" not found; SQL statement:
select "ID", "NAME", "TEST" from "PROJECT" [42122-187]]
I just changed the Project case class by adding the test argument, and updated the ProjectsTable class with the functions test, * and ?:
case class Project(id: Long, name: String, test: String)
private class ProjectsTable(tag: Tag) extends Table[Project](tag, "PROJECT") {
  def id = column[Long]("ID", O.AutoInc, O.PrimaryKey)
  def name = column[String]("NAME")
  def test = column[String]("TEST")
  def * = (id, name, test) <> (Project.tupled, Project.unapply)
  def ? = (id.?, name.?, test.?).shaped.<>({ r => import r._; _1.map(_ => Project.tupled((_1.get, _2.get, _3.get))) }, (_: Any) => throw new Exception("Inserting into ? projection not supported."))
}
And the create function:
def create(name: String): Future[Long] = {
  val project = Project(0, name, "d")
  db.run(Projects returning Projects.map(_.id) += project)
}
Thank you very much for your help!

By adding the field test to the ProjectsTable mapping you are telling Slick that the database table PROJECT has a column named TEST, but the PROJECT table has no such column. Either add the column to the schema in a new evolution (and apply it), or remove test from the case class and the table mapping.
You can see the SQL schema here: https://github.com/nemoo/play-slick3-example/blob/6c122970d7506bedb9230e3a31c30a5c7e27e93b/conf/evolutions/default/1.sql

Related

Scala Slick - Insert in table with omitting some columns and returning primary key of new line

For a Scala project, I'm using play-slick with play-slick-evolutions, both version 5.0.0, and I generate my DB classes with slick-codegen version 3.3.3.
I have a table with a primary key column and some columns with default values. I want to insert one row without mentioning the primary key column nor any columns with default values. Ideally, this action should return the new primary key of the created row.
My problem is that the code generated by slick-codegen seems to only allow inserting full rows, because it uses its own case class for the rows. This is how the generated code looks (without the comments):
case class SalesOrderRow(idSalesOrder: Int, fkCustomer: Int, createdAt: java.sql.Timestamp, createdBy: Option[String] = None)
implicit def GetResultSalesOrderRow(implicit e0: GR[Int], e1: GR[java.sql.Timestamp], e2: GR[Option[String]]): GR[SalesOrderRow] = GR{
prs => import prs._
SalesOrderRow.tupled((<<[Int], <<[Int], <<[java.sql.Timestamp], <<?[String]))
}
class SalesOrder(_tableTag: Tag) extends profile.api.Table[SalesOrderRow](_tableTag, Some("test"), "sales_order") {
def * = (idSalesOrder, fkCustomer, createdAt, createdBy) <> (SalesOrderRow.tupled, SalesOrderRow.unapply)
def ? = ((Rep.Some(idSalesOrder), Rep.Some(fkCustomer), Rep.Some(createdAt), createdBy)).shaped.<>({r=>import r._; _1.map(_=> SalesOrderRow.tupled((_1.get, _2.get, _3.get, _4)))}, (_:Any) => throw new Exception("Inserting into ? projection not supported."))
val idSalesOrder: Rep[Int] = column[Int]("id_sales_order", O.AutoInc, O.PrimaryKey)
val fkCustomer: Rep[Int] = column[Int]("fk_customer")
val createdAt: Rep[java.sql.Timestamp] = column[java.sql.Timestamp]("created_at")
val createdBy: Rep[Option[String]] = column[Option[String]]("created_by", O.Length(20,varying=true), O.Default(None))
}
lazy val SalesOrder = new TableQuery(tag => new SalesOrder(tag))
With this generated code I can now insert a row by providing the full row:
val insertActionsNotSoNice =
  DBIO.seq(
    salesOrders += Tables.SalesOrderRow(0, 3, new Timestamp(System.currentTimeMillis()), Some("这个不好"))
  )
But I want to omit the primary key and the timestamp parameter that has a default value. However, I can't do something like this:
val insertActionsNotCompiling =
  DBIO.seq(
    salesOrders.map(so => (so.fkCustomer, so.createdBy) += (3, Some("这个好")))
  )
I found many examples of this latter approach in the Slick documentation and on the web, but in all of them the classes used for the database mapped to a tuple instead of their own row class, like
class Coffees(tag: Tag) extends Table[(String, Int, Double, Int, Int)](tag, "COFFEES")
instead of
class Coffees(_tableTag: Tag) extends profile.api.Table[CoffeesRow](_tableTag, Some("test"), "coffee")
Do I have to throw slick-codegen out of my project and write all the classes myself to fit my needs, or is my combination of libraries with play-slick wrong? Or is there some simple, undocumented trick to omit columns when inserting?
After some days of trying several things, I found a solution.
You can map over the query to select only the columns you want to insert, and then use returning if you want to get back the id of the inserted row.
import dao.Tables.profile.api._
// ...
val salesOrders = TableQuery[SalesOrder]
val insertStatement = salesOrders.map(so => (so.fkCustomer, so.createdBy)) returning salesOrders.map(_.idSalesOrder) into ((_, id) => id)
Then you can write something like:
val newSalesOrderIdFuture = db.run(insertStatement += (5, Some("有效")))
So I omitted the primary key column (id_sales_order) and a timestamp column that defaults to now() in the database (created_at) in my insert statement.
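One note on the into part of that statement: returning salesOrders.map(_.idSalesOrder) on its own already yields the new id, so into ((_, id) => id) is optional there; into becomes useful when you want to combine the generated id with the values you just inserted. A small sketch under the same assumptions (generated Tables in scope, db as above):
// Sketch: combine the inserted (fkCustomer, createdBy) values with the generated id.
val insertWithInto = salesOrders.map(so => (so.fkCustomer, so.createdBy)) returning salesOrders.map(_.idSalesOrder) into ((cols, id) => (id, cols._1, cols._2))
// Running it gives a Future[(Int, Int, Option[String])]: the new id, fkCustomer, createdBy.
val insertedFuture = db.run(insertWithInto += ((5, Some("有效"))))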

Scala Slick extend the functionality of a table outside of its structure

I'm using Slick code-gen to output a very normal Tables.scala file, which maps the tables/columns of the structure of my database.
However, I want to EXTEND the functionality of those tables in my DAOs, and it's proving to be impossible for me (I'm fairly new to Scala and the Play Framework).
Inside Tables.scala, inside the class Meeting, you can write functions that have access to the columns, e.g.:
class Meeting(_tableTag: Tag) extends profile.api.Table[MeetingRow](_tableTag, "meeting") {
def * = (id, dateandtime, endtime, organisationid, details, adminid, datecreated, title, agenda, meetingroom) <> (MeetingRow.tupled, MeetingRow.unapply)
def ? = (Rep.Some(id), Rep.Some(dateandtime), Rep.Some(endtime), Rep.Some(organisationid), Rep.Some(details), Rep.Some(adminid), Rep.Some(datecreated), Rep.Some(title), Rep.Some(agenda), Rep.Some(meetingroom)).shaped.<>({r=>import r._; _1.map(_=> MeetingRow.tupled((_1.get, _2.get, _3.get, _4.get, _5.get, _6.get, _7.get, _8.get, _9.get, _10.get)))}, (_:Any) => throw new Exception("Inserting into ? projection not supported."))
val id: Rep[Int] = column[Int]("id", O.AutoInc, O.PrimaryKey)
val dateandtime: Rep[java.sql.Timestamp] = column[java.sql.Timestamp]("dateandtime")
val endtime: Rep[java.sql.Timestamp] = column[java.sql.Timestamp]("endtime")
val organisationid: Rep[Int] = column[Int]("organisationid")
val details: Rep[String] = column[String]("details", O.Default(""))
val adminid: Rep[Int] = column[Int]("adminid")
val datecreated: Rep[java.sql.Timestamp] = column[java.sql.Timestamp]("datecreated")
val title: Rep[String] = column[String]("title", O.Default(""))
val agenda: Rep[String] = column[String]("agenda", O.Default(""))
val meetingroom: Rep[Int] = column[Int]("meetingroom")
def getAttendees = Tables.Meeting2uzer.filter(_.meetingid === id)
where "ID" in the above function is a column in Meeting.
now the problem arises when i want to write that same function "getAttendees" in my DAO which doesn't have access to the columns in scope.
something along the lines of....
@Singleton
class SlickMeetingDAO @Inject()(db: Database)(implicit ec: ExecutionContext) extends MeetingDAO with Tables {
override val profile: JdbcProfile = _root_.slick.jdbc.PostgresProfile
import profile.api._
private val queryById = Compiled((id: Rep[Int]) => Meeting.filter(_.id === id))
def getAttendees = Meeting2uzer.filter(_.meetingid === "NEED ID OF COLUMN IN SCOPE")
How do I get id, which is a column in Tables.Meeting, into scope in my DAO to finish this getAttendees function?
If I understand correctly, you're trying to join two tables?
Meeting2uzer.join(Meeting).on(_.meetingid === _.id)
What you've done inside Tables.scala is more in line with creating a foreign key in Slick. You could define a Slick foreign key instead of using an explicit join; see the Slick documentation here.
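To make that concrete in the DAO from the question, a minimal sketch (assuming Meeting2uzerRow is the row class slick-codegen generated for that table, and that db is the Slick database handle your DAO runs queries with, plus the usual Future/ExecutionContext imports; both names are assumptions about code not shown here):
// Sketch: take the meeting id as a plain value, so no column needs to be in scope.
def getAttendees(meetingId: Int): Future[Seq[Meeting2uzerRow]] =
  db.run(Meeting2uzer.filter(_.meetingid === meetingId).result)

// The same result expressed through the join suggested above:
def getAttendeesViaJoin(meetingId: Int): Future[Seq[Meeting2uzerRow]] =
  db.run(
    Meeting2uzer
      .join(Meeting).on(_.meetingid === _.id)
      .filter { case (_, meeting) => meeting.id === meetingId }
      .map { case (attendee, _) => attendee }
      .result
  )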

Play Framework and Slick: testing database-related services

I'm trying to follow the most idiomatic way to have a few fully tested DAO services.
I've got a few case classes such as the following:
case class Person (
id :Int,
firstName :String,
lastName :String
)
case class Car (
id :Int,
brand :String,
model :String
)
Then I've got a simple DAO class like this:
class ADao @Inject()(protected val dbConfigProvider: DatabaseConfigProvider) extends HasDatabaseConfigProvider[JdbcProfile] {
import driver.api._
private val persons = TableQuery[PersonTable]
private val cars = TableQuery[CarTable]
private val personCar = TableQuery[PersonCar]
class PersonTable(tag: Tag) extends Table[Person](tag, "person") {
def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
def firstName = column[String]("name")
def lastName = column[String]("description")
def * = (id, firstName, lastName) <> (Person.tupled, Person.unapply)
}
class CarTable(tag: Tag) extends Table[Car](tag, "car") {
def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
def brand = column[String]("brand")
def model = column[String]("model")
def * = (id, brand, model) <> (Car.tupled, Car.unapply)
}
// relationship
class PersonCar(tag: Tag) extends Table[(Int, Int)](tag, "person_car") {
def carId = column[Int]("c_id")
def personId = column[Int]("p_id")
def * = (carId, personId)
}
// simple function that I want to test
def getAll(): Future[Seq[((Person, (Int, Int)), Car)]] = db.run(
persons
.join(personCar).on(_.id === _.personId)
.join(cars).on(_._2.carId === _.id)
.result
)
}
And my application.conf looks like:
slick.dbs.default.driver="slick.driver.PostgresDriver$"
slick.dbs.default.db.driver="org.postgresql.Driver"
slick.dbs.default.db.url="jdbc:postgresql://super-secrete-prod-host/my-awesome-db"
slick.dbs.default.db.user="myself"
slick.dbs.default.db.password="yolo"
Now, going through Testing with databases and trying to mimic the play-slick sample project,
I'm getting into a lot of trouble: I cannot seem to understand how to make my test use a different database. I suppose I need to add a different db to my conf file, say slick.dbs.test, but then I couldn't find out how to inject that into the test.
Also, on the sample repo, there's some "magic" like Application.instanceCache[CatDAO] or app2dao(app).
Can anybody point me at some full-fledged example or repo that deals correctly with testing Play and Slick?
Thanks.
I agree it's confusing. I don't know if this is the best solution, but I ended up having a separate configuration file test.conf that specifies an in-memory database:
slick.dbs {
default {
driver = "slick.driver.H2Driver$"
db.driver = "org.h2.Driver"
db.url = "jdbc:h2:mem:play-test"
}
}
and then told sbt to use this when running the tests:
[..] javaOptions in Test ++= Seq("-Dconfig.file=conf/test.conf")
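If you prefer not to swap the whole config file, another option is to override just the Slick settings when building the application for a test and pull the DAO out of the injector. A sketch (the override values mirror the test.conf above; everything else is an assumption about your test setup):
import play.api.inject.guice.GuiceApplicationBuilder

// Build a test application whose default Slick db points at the in-memory H2 database.
val application = new GuiceApplicationBuilder()
  .configure(
    "slick.dbs.default.driver"    -> "slick.driver.H2Driver$",
    "slick.dbs.default.db.driver" -> "org.h2.Driver",
    "slick.dbs.default.db.url"    -> "jdbc:h2:mem:play-test"
  )
  .build()

// The DAO comes out of the injector already wired against the test database.
val dao = application.injector.instanceOf[ADao]
However you structure the test itself, start and stop the application around it, e.g. with play.api.test.Helpers.running(application) { ... }.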

Insert row with auto increment id - Playframework Scala - Slick

One of my evolutions contains a very simple table definition with a BIGSERIAL column:
CREATE TABLE product (
id BIGSERIAL NOT NULL PRIMARY KEY,
name VARCHAR NOT NULL,
color VARCHAR NOT NULL
);
I use SlickCodeGenerator to generate classes from the database itself:
case class ProductRow(id: Long, name: String, color: String)
implicit def GetResultProductRow(implicit e0: GR[Long], e1: GR[String]): GR[ProductRow] = GR{
prs => import prs._
ProductRow.tupled((<<[Long], <<[String], <<[String]))
}
class Product(_tableTag: Tag) extends Table[ProductRow](_tableTag, "product") {
def * = (id, name, color) <> (ProductRow.tupled, ProductRow.unapply)
def ? = (Rep.Some(id), Rep.Some(name), Rep.Some(color)).shaped.<>({r=>import r._; _1.map(_=> ProductRow.tupled((_1.get, _2.get, _3.get)))}, (_:Any) => throw new Exception("Inserting into ? projection not supported."))
val id: Rep[Long] = column[Long]("id", O.AutoInc, O.PrimaryKey)
val name: Rep[String] = column[String]("name")
val color: Rep[String] = column[String]("color")
}
lazy val Product = new TableQuery(tag => new Product(tag))
I would like to insert a row into the Product table without an id, so that it is generated by the database. The problem is that ProductRow.id is not optional. Most of the solutions suggest adding new methods to the Product class; however, I must not touch the sources generated by Slick, as those changes could be lost at any time. Is there a way to insert a row without an id using such generated files?
I use Slick 3.0 and Play Framework 2.4.1.
Edit: Sending a dummy id solves the issue, but it is redundant over the wire. I am looking for something like: insert into product(name, color) values('name', 'color').
You can send any value as the id in ProductRow; since the column is BIGSERIAL (and marked O.AutoInc), it will be replaced by the auto-generated value. And if you don't want to deal with an id at all, create a case class without the id but with the same remaining fields as Product:
case class ProductSimilar(name: String, color: String)
val prodSimilar = ProductSimilar("name", "color")
Before inserting, copy the fields (plus a dummy id) into a row and insert it into the database:
val db: PostgresDriver.backend.DatabaseDef = Database.forURL(url, user = user, password = password, driver = jdbcDriver)
val row = ProductRow(0L, prodSimilar.name, prodSimilar.color)
db.run(Product returning Product.map(obj => obj) += row)
Hope this is helpful...
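Another way to get exactly insert into product(name, color) values(...) without touching the generated file is the map-the-columns technique shown in the earlier answer on this page: map the generated Product query to the columns you want to set and let returning hand back the id. A sketch (assuming db is a Slick Database and the generated Tables are in scope):
// Sketch: insert only name and color; the database fills in the BIGSERIAL id.
val insertNameAndColor = Product.map(p => (p.name, p.color)) returning Product.map(_.id)
// Runs roughly as: insert into product(name, color) values('name', 'color')
val newIdFuture: Future[Long] = db.run(insertNameAndColor += (("name", "color")))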

Slick 2: Delete row in slick with play framework

This requirement should be really easy, but I don't know why it's not working. I want to delete a row based on its id using Slick with the Play Framework.
I'm following this example from the play-slick module, but the compiler complains that value delete is not a member of scala.slick.lifted.Query[models.Tables.MyEntity,models.Tables.MyEntity#TableElementType].
My controller looks like:
def delete(id: Int) = DBAction { implicit rs =>
  val q = MyEntity.where(_.id === id)
  q.delete
  Ok("Entity deleted")
}
I've imported the play.api.db.slick.Config.driver.simple._
What am I doing wrong?
Edit:
My schema definition looks like:
class Cities(tag: Tag) extends Table[CityRow](tag, "cities") {
def * = (cod, name, state, lat, long, id) <> (CityRow.tupled, CityRow.unapply)
/** Maps whole row to an option. Useful for outer joins. */
def ? = (cod.?, name.?, state.?, lat, long, id.?).shaped.<>({r=>import r._; _1.map(_=> CityRow.tupled((_1.get, _2.get, _3.get, _4, _5, _6.get)))}, (_:Any) => throw new Exception("Inserting into ? projection not supported."))
val cod: Column[String] = column[String]("cod")
val name: Column[String] = column[String]("name")
val state: Column[Int] = column[Int]("state")
val lat: Column[Option[String]] = column[Option[String]]("lat")
val long: Column[Option[String]] = column[Option[String]]("long")
val id: Column[Int] = column[Int]("id", O.AutoInc, O.PrimaryKey)
/** Foreign key referencing Departamentos (database name fk_ciudades_departamentos1) */
lazy val stateFk = foreignKey("fk_cities_states", state, States)(r => r.idstate, onUpdate=ForeignKeyAction.NoAction, onDelete=ForeignKeyAction.NoAction)
}
I also had a look at that example some time ago and it looked wrong to me too; I'm not sure whether I was doing something wrong myself or not. The delete function was always a bit tricky to get right, especially using lifted.Query (like you are doing). Later in the process I made it work by importing the right driver, in my case scala.slick.driver.PostgresDriver.simple._.
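As a sketch of what that looks like in the controller (reusing the MyEntity query from the question, and assuming the tables were generated for PostgresDriver and that play-slick's DBAction supplies the implicit session, as in the example the question follows):
import scala.slick.driver.PostgresDriver.simple._ // must match the driver the tables were defined with
import play.api.db.slick._

def delete(id: Int) = DBAction { implicit rs =>
  // With the matching driver's simple._ in scope, the query gains its delete method.
  MyEntity.where(_.id === id).delete
  Ok("Entity deleted")
}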
Edit after comment:
Probably you have an error in the shape function; it's hard to say without looking at your schema declaration. This is an example:
case class CityRow(id: Long, name: String)

class City(tag: Tag) extends Table[CityRow](tag, "city") {
  def * = (id, name) <> (CityRow.tupled, CityRow.unapply) // <- this is the shape function
  def ? = (id.?, name).shaped.<>({
    r => import r._
      _1.map(_ => CityRow.tupled((_1.get, _2)))
  }, (_: Any) => throw new Exception("Inserting into ? projection not supported."))
  val id: Column[Long] = column[Long]("id", O.AutoInc, O.PrimaryKey)
  val name: Column[String] = column[String]("name")
}